Is the Sample in https://github.com/microsoft/azure-spring-boot Secure?

The readme describes placing a service principal's ID and secret in the properties. Is this not counter to using a key vault to store your secrets? Or am I reading this incorrectly?

Yes, the sample exposes the client ID and client secret in application.properties.
If you want to use the sample in a production environment on Azure, the best practice is to use MSI (managed identity). Then there is no need to expose the ID and secret in application.properties: just enable MSI and add it to the Key Vault access policy. It is supported on Azure Spring Cloud, App Service, and VMs.
Reference - https://github.com/microsoft/azure-spring-boot/blob/master/azure-spring-boot-starters/azure-keyvault-secrets-spring-boot-starter/README.md#use-msi--managed-identities
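For illustration, a minimal sketch of reading a secret with MSI using the Azure SDK directly (azure-identity and azure-security-keyvault-secrets) rather than the starter; the vault URL and secret name are placeholders:
import com.azure.identity.ManagedIdentityCredentialBuilder;
import com.azure.security.keyvault.secrets.SecretClient;
import com.azure.security.keyvault.secrets.SecretClientBuilder;

public class KeyVaultExample {
    public static void main(String[] args) {
        // With MSI there is no client id/secret in application.properties:
        // the credential comes from the instance's managed identity.
        SecretClient client = new SecretClientBuilder()
                .vaultUrl("https://my-vault.vault.azure.net") // placeholder vault URL
                .credential(new ManagedIdentityCredentialBuilder().build())
                .buildClient();
        System.out.println(client.getSecret("db-password").getValue()); // placeholder secret name
    }
}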

But the real question is what to do if you do not want to use MSI; supplying the key and tenant with enable = false does not work, as in the above reference.

Related

Keep My Env Parameters Secure While Deploying To AWS

I have a project in Laravel 8 with some secret env parameters that I do not want to ship to GitHub with my application. I will deploy my application to AWS Elastic Beanstalk with GitHub Actions. How do I keep all the secrets secure and put them on the EC2 instance when the application is deployed?
There are multiple ways to do that, and you should not send your env file to GitHub with your application.
You can use Beanstalk's own parameter store page. However, if you do that, another developer who has access to your AWS account can see all the env parameters. It is a simple key-value store page.
Beanstalk Panel -> (Select Your Environment) -> Configuration -> Software
Under Systems Manager there is a service called Parameter Store (this is my preferred way).
Here you can securely add as many parameters as you like. You can add plain String parameters as well as SecureString parameters (for things like passwords or API keys), and also integers, but String and SecureString are the types I use most.
You can organize all your parameters by path, like "/APP_NAME/DB_NAME", etc.
You should fetch all the parameters from Parameter Store on your EC2 instance and put them in a newly created .env file, as in the sketch below.
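The question is about Laravel, but the Parameter Store call is the same in every AWS SDK; a hedged sketch in Java (SDK v2), with a placeholder parameter path:
import software.amazon.awssdk.services.ssm.SsmClient;
import software.amazon.awssdk.services.ssm.model.GetParameterRequest;

public class ParameterStoreExample {
    public static void main(String[] args) {
        // Credentials come from the instance profile / default provider chain.
        try (SsmClient ssm = SsmClient.create()) {
            String dbName = ssm.getParameter(GetParameterRequest.builder()
                        .name("/APP_NAME/DB_NAME") // placeholder path
                        .withDecryption(true)      // required to read SecureString values
                        .build())
                    .parameter().value();
            // Write lines like this into the newly created .env file.
            System.out.println("DB_NAME=" + dbName);
        }
    }
}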
GitHub Actions also has GitHub Secrets, and you can put all your secret parameters on the repository's secrets page. Your workflow can read the secrets, inject them into your application, and ship it from GitHub to AWS directly.
You can find this page under Settings in your repository.

Is there a method preventing dynamic ec2 key pairs from being written to tfstate file?

We are starting to use Terraform to build our AWS EC2 infrastructure but would like to do this as securely as possible.
Ideally, what we would like to do is create a key pair for each Windows EC2 instance dynamically and store the private key in Vault. This is possible, but I cannot think of a way of implementing it without having the private key written to the tfstate file. Yes, I know I can store the tfstate file in an encrypted S3 bucket, but this does not seem like an optimal secure solution.
I am happy to write custom code if need be to have the key pair generated via another mechanism and the name passed as a variable to Terraform, but I don't want to if there are more robust and tested methods out there. I was hoping we could use Vault to do this, but from my research it does not look possible.
Has anyone got any experience of doing something similar? Failing that, any suggestions?
The most secure option is to have an arbitrary keypair whose private key you destroy, and user_data that joins the instances to an AWS Managed Microsoft AD domain controller. After that you can use conventional AD users and groups to control access to the instances (but not group policy in any depth, regrettably). You'll need a domain member server to administrate AD at that level of detail.
If you really need to be able to use local admin on these Windows EC2 instances, then you'll need to create the keypair for decrypting the password once manually and then share it securely through a secret or password manager with other admins using something like Vault or 1Password.
I don't see any security advantage to creating a keypair per instance, just considerable added complexity. If you're concerned about exposure, change the administrator passwords after obtaining them and store those in your secret or password manager.
I still advise going with AD if you are using Windows. Windows with AD enables world-leading unified endpoint management and Microsoft has held the lead on that for a very long time.

Is it possible to deploy a Spring Boot app to App Engine and connect to the database?

I feel that I'm going around in circles here, so please bear with me. I want to deploy my Spring Boot application to App Engine, but unlike the simple sample Google provides, mine requires a database, and that means credentials. I'm running Java 11 on Standard on Google App Engine.
I managed to make my app successfully connect by having this in the application.properties:
spring.datasource.url=jdbc:postgresql://google/recruiters_wtf?cloudSqlInstance=recruiters-wtf:europe-west2:recruiters-wtf&socketFactory=com.google.cloud.sql.postgres.SocketFactory&user=the_user&password=monkey123
The problem is that I don't want to commit any credentials to the repository, so this is not acceptable. I could use environment variables, but then I'd have to define them in the app.yaml file. I either keep a non-committed app.yaml file that is needed to deploy, which is cumbersome, or I commit it and I'm back at square one, committing credentials to the repository.
Since apparently Google App Engine cannot have environment variables defined in any other way (unlike Heroku), does this mean it's impossible to deploy a Spring Boot app to App Engine and have it connect to the database without using some unsafe/cumbersome practices? I feel I'm missing something here.
Based on my understanding of what you have described, you would like to essentially connect your Spring Boot application running on Google App Engine to a database without exposing the sensitive information. If that is the case, Cloud KMS offers secret management capabilities. Specifically, small pieces of sensitive data that applications require at build or runtime are referred to as secrets. These secrets can be encrypted and decrypted with a symmetric key. In your case, you can store the database credentials as secrets. You may find further details of the process for encrypting/decrypting a secret here.
There are currently three ways to manage secrets:
1. Storing secrets in code, encrypted with a key from Cloud KMS. This solution implements secrets at the application layer.
2. Storing secrets in a storage bucket in Cloud Storage, encrypted at rest. You can use a Cloud Storage bucket to store your database credentials and can also grant that bucket a specific service account. This solution allows for separation of systems: in case the code repository is breached, your secrets themselves may still be protected.
3. Using a third-party secret management system.
In terms of storing the secrets themselves, I found the following steps, outlined here, quite useful. This guide walks users through setting up and storing secrets within a Cloud Storage bucket. The secret is encrypted at the application layer with an encryption key from Cloud KMS. Given your use case, this would be a great option, as your secret would be stored within a bucket instead of your app.yaml file. The secret being stored in a bucket also lets you restrict access to it with service account roles.
Essentially, your app will need to perform an API call to Google Cloud Storage in order to download the KMS-encrypted file that contains the secret. It would then use the KMS key to decrypt the file so that it can read out the password and use it to make a manual connection to the database. These extra steps add more security layers, which is the entire idea behind the note 'Saving credentials in environment variables is convenient, but not secure - consider a more secure solution such as Cloud KMS to help keep secrets safe.' in the Google example repository for Cloud SQL.
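As a sketch of that flow (download the ciphertext from Cloud Storage, decrypt it with Cloud KMS, use the plaintext as the database password), using the google-cloud-storage and google-cloud-kms Java clients; the bucket, object, and key names are placeholders:
import com.google.cloud.kms.v1.CryptoKeyName;
import com.google.cloud.kms.v1.KeyManagementServiceClient;
import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import com.google.protobuf.ByteString;

public class SecretLoader {
    public static String loadDbPassword() throws Exception {
        // 1. Download the KMS-encrypted ciphertext from the bucket.
        Storage storage = StorageOptions.getDefaultInstance().getService();
        byte[] ciphertext = storage.readAllBytes(BlobId.of("my-secrets-bucket", "db-password.enc"));

        // 2. Decrypt it with the same Cloud KMS key that encrypted it.
        try (KeyManagementServiceClient kms = KeyManagementServiceClient.create()) {
            CryptoKeyName key = CryptoKeyName.of("my-project", "global", "my-keyring", "my-key");
            return kms.decrypt(key, ByteString.copyFrom(ciphertext)).getPlaintext().toStringUtf8();
        }
    }
}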
I hope this helps!
Assuming you are OK with using KMS or GCS to get the credentials, you can programmatically set them in Spring Boot. See this post:
Configure DataSource programmatically in Spring Boot
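That post boils down to defining your own DataSource bean; a minimal sketch, reusing the URL and user from the question, and assuming a fetchDbPassword() helper (hypothetical, not a real API) that wraps whatever KMS/GCS lookup you choose:
import javax.sql.DataSource;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DataSourceConfig {

    @Bean
    public DataSource dataSource() {
        // The password never appears in application.properties.
        return DataSourceBuilder.create()
                .url("jdbc:postgresql://google/recruiters_wtf?cloudSqlInstance=recruiters-wtf:europe-west2:recruiters-wtf&socketFactory=com.google.cloud.sql.postgres.SocketFactory")
                .username("the_user")
                .password(fetchDbPassword())
                .build();
    }

    private String fetchDbPassword() {
        // Hypothetical helper: decrypt and return the password, e.g. via Cloud KMS.
        throw new UnsupportedOperationException("wire up your secret lookup here");
    }
}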
As you pointed out, there's no built-in way to set environment variables in App Engine other than with the app.yaml file. I'm not an expert in Spring Boot, but unless you can set/override some hook to initialize env vars from Java code prior to application.properties evaluation, you'll need to set these at build time.
Option 1: Using Cloud Build
I know you're not really keen to use Cloud Build but that would be something like this.
First, following the instructions here (after creating a KeyRing and CryptoKey in KMS and granting access to the Cloud Build service account), encrypt your environment variable using KMS from your terminal and get back its base64 representation:
# --plaintext-file=- reads from stdin; --ciphertext-file=- writes to stdout
echo -n "$DB_PASSWORD" | gcloud kms encrypt \
  --plaintext-file=- \
  --ciphertext-file=- \
  --location=global \
  --keyring=[KEYRING-NAME] \
  --key=[KEY-NAME] | base64
Next, let's say you have an app.yaml file like this:
runtime: java11
instance_class: F1
env_variables:
  USER: db_user
  PASSWORD: db_passwd
create a cloudbuild.yaml file to define your build steps:
steps:
# replace the placeholder value in app.yaml with the decrypted secret from KMS
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args: ['-c', 'sed -i "s/db_passwd/$$PASSWORD/g" src/main/appengine/app.yaml']
  secretEnv: ['PASSWORD']
- name: 'gcr.io/cloud-builders/mvn'
  args: ['clean']
- name: 'gcr.io/cloud-builders/mvn'
  args: ['package']
- name: 'gcr.io/cloud-builders/mvn'
  args: ['appengine:deploy']
  timeout: '1600s'
secrets:
- kmsKeyName: projects/<PROJECT-ID>/locations/global/keyRings/<KEYRING_NAME>/cryptoKeys/<KEY_NAME>
  secretEnv:
    PASSWORD: <base64-encoded encrypted password>
timeout: '1600s'
You could then deploy your app by running the following command:
gcloud builds submit .
The advantage of this method is that your local app.yaml file only contains placeholder values and can be safely committed. You can even set this build to trigger automatically every time you commit to a remote repository.
Option 2: Locally with a bash script
Instead of running mvn appengine:deploy to deploy your app, you could create a bash script that replaces the values in app.yaml, deploys the app, and removes the values straight away. Something like:
#!/bin/bash
sed -i "s/db_passwd/$PASSWORD/g" src/main/appengine/app.yaml
mvn appengine:deploy
sed -i "s/$PASSWORD/db_passwd/g" src/main/appengine/app.yaml
and execute that bash script instead of running the maven command.
I would probably suggest the combination of Spring Cloud Config & Google Runtime Configuration API with your Spring Boot App.
Spring Cloud Config is the component responsible for retrieving configuration from remote locations and serving it to your Spring Boot app during initialization/boot-up. The remote locations can be anything: e.g., a Git repository is widely used, but for your use case you can store the configuration in the Google Runtime Configuration API.
So a sample flow will be like this.
Your Spring Boot App(with Config Client) --> Spring Cloud Config Server --> Google Runtime Configuration API
This requires you to bring up a Spring Cloud Config Server as another app in GCP and to enable communication from your Spring Boot apps to this centralized Config Server, which interacts with the Google Runtime API.
Some documentation links.
https://cloud.spring.io/spring-cloud-config/reference/html/
https://docs.spring.io/spring-cloud-gcp/docs/1.1.0.M1/reference/html/_spring_cloud_config.html
https://cloud.google.com/deployment-manager/runtime-configurator/reference/rest/
Sample Spring Cloud GCP Config Example.
https://github.com/spring-cloud/spring-cloud-gcp/tree/master/spring-cloud-gcp-samples/spring-cloud-gcp-config-sample
You can try to encrypt the password in the application properties. Take a look at http://mbcoder.com/spring-boot-how-to-encrypt-properties-in-application-properties/
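As one commonly used variant of that idea (an assumption here, not necessarily the exact approach in the linked post), the jasypt-spring-boot starter decrypts ENC(...) property values at startup while the master password stays out of the repository:
import com.ulisesbocchio.jasyptspringboot.annotation.EnableEncryptableProperties;
import org.springframework.context.annotation.Configuration;

// With jasypt-spring-boot on the classpath, values wrapped in ENC(...) in
// application.properties are decrypted at startup, e.g.
//   spring.datasource.password=ENC(<ciphertext>)
// The master password is passed at runtime rather than committed:
//   java -jar app.jar --jasypt.encryptor.password=<master-password>
@Configuration
@EnableEncryptableProperties
public class EncryptablePropertiesConfig {
}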
You should use GCP's Key Management Service: https://cloud.google.com/kms/
We use a few options:
1 - Without Docker
Can you use this approach via the env or console?
We use (Spring Boot defined) environment variables. This is the default way of doing this:
SPRING_DATASOURCE_USERNAME=myusername
SPRING_DATASOURCE_PASSWORD=mypassword
According to the Spring Boot documentation, these will override any values of the corresponding application.properties variables. So you can specify default usernames and passwords for development and have them overridden at (test or) production deploy time.
Another way is documented in this post:
spring.datasource.url = ${OPENSHIFT_MYSQL_DB_HOST}:${OPENSHIFT_MYSQL_DB_PORT}/"nameofDB"
spring.datasource.username = ${OPENSHIFT_MYSQL_DB_USERNAME}
spring.datasource.password = ${OPENSHIFT_MYSQL_DB_PASSWORD}
2 - A Docker-like approach via a console
Your question is described in this post. The default solution is to work with 'secrets', which are made specifically for this. You can convert any secret (mounted as a file) to an environment variable in the process of building and deploying your application. This is a simple action that is described in many posts; look for newer approaches.
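For example, Docker mounts each secret as a file under /run/secrets/, so converting it for the application can be as simple as reading the file at startup; a sketch with a placeholder secret name:
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class DockerSecretExample {
    public static void main(String[] args) throws IOException {
        // Docker mounts the secret named "db_password" at this path.
        String dbPassword = Files.readString(Path.of("/run/secrets/db_password")).trim();
        // Hand it to the app however you prefer, e.g. as a system property.
        System.setProperty("spring.datasource.password", dbPassword);
    }
}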
I feel like it is easy to do. I used the properties below in my Spring Boot app, which connects to a public Google Cloud SQL instance.
spring.datasource.url=jdbc:postgresql://IP:5432/yourdatabasename
spring.datasource.platform = postgres
spring.datasource.username = youruser
spring.datasource.password = yourpass
The rest will be taken care of by the JpaRepository. I hope this helps.
Let me know if any help is needed.

How can I configure application.properties using AWS CodeDeploy and/or CloudFormation?

I have a Spring Web Service deployed on Elastic Beanstalk. I'm using AWS CloudFormation for the infrastructure and I'm using AWS CodePipeline to deploy the web service automatically from merges to the master branch.
Recently I added DynamoDB integration, and I need to configure a couple things in my application.properties. I attempted to use environment variables to configure the application.properties but I hit a wall when trying to set the environment variables from CodeDeploy.
This is my application.properties
amazon.dynamodb.endpoint=${DYNAMODB_ENDPOINT:http://localhost:8000}
amazon.dynamodb.region=${AWS_REGION:default-region}
amazon.dynamodb.accesskey=${DYNAMODB_ACCESS_KEY:TestAccessKey}
amazon.dynamodb.secretkey=${DYNAMODB_SECRET_KEY:TestSecretKey}
spring.data.dynamodb.entity2ddl.auto = create-drop
spring.data.dynamodb.entity2ddl.gsiProjectionType = ALL
spring.data.dynamodb.entity2ddl.readCapacity = 10
spring.data.dynamodb.entity2ddl.writeCapacity = 1
The defaults are for when I'm running a local DynamoDB instance, and they work fine. However, I can't figure out how to get CodeDeploy to set environment variables for me. I also considered getting CloudFormation to set the environment variables, but couldn't find out how to do that either. I tried manually setting the environment variables on the EC2 instance, but that didn't work and isn't the solution I'm looking for, as I'm using EB and want this project to use fully automated deployments. Please let me know if this is possible, what the industry standard is for configuring web services, and whether I'm misunderstanding CodeDeploy or CloudFormation.
In general, it is a bad practice to include access and secret keys in any sort of files or in your deployment automation.
Your instance that your application is deployed to should have an instance profile (i.e. IAM Role) attached to it which should have the appropriate DynamoDB permissions you need.
If you have that instance profile attached, the SDK should automatically be able to detect the credentials, region and endpoint it needs to communicate with.
You may need to update the way you are creating your DynamoDB client to just use the defaults.
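For instance, a minimal sketch (assuming the AWS SDK for Java v1, which spring-data-dynamodb builds on): building the client without explicit keys lets the default provider chain pick up the instance profile credentials and region.
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;

public class DynamoClientConfig {
    public AmazonDynamoDB amazonDynamoDb() {
        // No accessKey/secretKey/endpoint here: the DefaultAWSCredentialsProviderChain
        // finds the instance profile credentials and region automatically.
        return AmazonDynamoDBClientBuilder.standard().build();
    }
}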
To set up your development machine with these properties in a way that the AWS SDK can retrieve them without explicitly putting them in properties files, you can run the aws configure command of the AWS CLI, which will set up your ~/.aws/ folder with information about the region and credentials to use on your dev machine.

How to secure the AWS access key in serverless

I am writing a serverless application which is connected to DynamoDB.
Currently I am reading the access key ID and secret access key from a JSON file.
I am going to use Jenkins for CI and need a way to secure these keys.
What I am going to do is set the keys as environment variables and read them in the application. But the problem is that I don't know how to set the environment variables every time a Lambda function is started.
I have read that there's a way to configure this in the serverless.yml file, but I don't know how.
How can I achieve this?
Don't use environment variables. Use the IAM role that is attached to your Lambda function. AWS Lambda assumes the role on your behalf and sets the credentials as environment variables when your function runs. You don't even need to read these variables yourself; all of the AWS SDKs will read them automatically.
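A minimal sketch of what that looks like in a Java handler (AWS SDK v2; the table name is a placeholder): no keys appear anywhere, and the client picks up the role's temporary credentials on its own.
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.ScanRequest;

public class Handler implements RequestHandler<Object, Integer> {
    // No credentials configured: the SDK reads the env vars that Lambda
    // sets from the function's IAM role.
    private final DynamoDbClient dynamo = DynamoDbClient.create();

    @Override
    public Integer handleRequest(Object input, Context context) {
        return dynamo.scan(ScanRequest.builder().tableName("my-table").build()).count();
    }
}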
There's a good guide on serverless security which, among other topics, covers this one as well. It's similar to the OWASP Top 10.
In general, the best practice would be to use AWS Secrets Manager together with the SSM Parameter Store.
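A hedged sketch of reading one secret from Secrets Manager (Java SDK v2; the secret id is a placeholder):
import software.amazon.awssdk.services.secretsmanager.SecretsManagerClient;
import software.amazon.awssdk.services.secretsmanager.model.GetSecretValueRequest;

public class SecretsExample {
    public static void main(String[] args) {
        // Credentials come from the Lambda/instance role, not from code.
        try (SecretsManagerClient sm = SecretsManagerClient.create()) {
            String secret = sm.getSecretValue(GetSecretValueRequest.builder()
                        .secretId("prod/myapp/db") // placeholder secret id
                        .build())
                    .secretString();
            System.out.println(secret);
        }
    }
}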
