How to hide password in shell script during ARM template deployment - shell

I am using an ARM template to deploy an Azure VM.
template.json
"adminPassword": {
"type": "securestring"
}
parameters.json
"adminPassword": {
"value": null
}
When I use "null" in parameters, the deployment prompts me to enter the password.
When I type the adminPassword during deployment it is shown in plain text, but I want it hidden (e.g. ******).
How can I achieve this?

As 4c74356b41 said, there is currently no way to do it directly.
You could use Azure Key Vault to avoid the password showing up in plain text.
Key Vault can store two types of information: Keys (certificates) and Secrets. In your scenario, you need to use Secrets, so you need to create a secret in Key Vault.
# read the password without echoing it in plain text
read -s password
azure keyvault create --vault-name 'ContosoKeyVault' --resource-group 'ContosoResourceGroup' --location 'East Asia'
azure keyvault secret set --vault-name 'ContosoKeyVault' --secret-name 'SQLPassword' --value "$password"
After this you'll need to enable the Key Vault for template deployment. You can do this using the following command:
azure keyvault set-policy --vault-name ContosoKeyVault --enabled-for-template-deployment true
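For reference, if you are on the newer az CLI instead of the old azure (xplat) CLI, a sketch of the same flow looks like this (vault name and location reused from above; the --enabled-for-template-deployment flag can be set directly at creation time):
# read the password without echoing it, then store it as a secret
read -s password
az keyvault create --name ContosoKeyVault --resource-group ContosoResourceGroup --location eastasia --enabled-for-template-deployment true
az keyvault secret set --vault-name ContosoKeyVault --name SQLPassword --value "$password"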
You then need to modify your parameter file like below:
"adminPassword": {
"reference": {
"keyVault": {
"id": "/subscriptions/<subscription-guid>/resourceGroups/<group name>/providers/Microsoft.KeyVault/vaults/<vault name>"
},
"secretName": "<seccret name>"
}
},
You could refer to this template: 101-vm-secure-password.

Related

Serverless Framework - What permissions do I need to use AWS SSM Parameter Store?

I'm opening this question because there seems to be no documentation on this, so I would like to provide the answer after much time wasted in trial and error.
As background, the Serverless framework allows loading both plaintext & SecureString values from AWS SSM Parameter Store.
What permissions are needed to access & load these SSM Parameter Store values when performing serverless deploy?
In general, accessing & decrypting AWS SSM Parameter Store values requires these 3 permissions:
ssm:DescribeParameters
ssm:GetParameters
kms:Decrypt
Here's a real-world example that only allows access to SSM parameters relating to my Lambda functions (distinguished by following a common naming convention/pattern). It works under the following circumstances:
SecureString values are encrypted with the default AWS SSM encryption key.
All parameters use the following naming convention:
a. /${app-name-or-app-namespace}/serverless/${lambda-function-name}/then/whatever/else/you/want
b. ${lambda-function-name} must begin with sls-
So let's say I have an app called myCoolApp, and a Lambda function called sls-myCoolLambdaFunction. Perhaps I want to save database config values such as username and password.
I would have two SSM parameters created:
/myCoolApp/serverless/sls-myCoolLambdaFunction/dev/database/username (plaintext)
/myCoolApp/serverless/sls-myCoolLambdaFunction/dev/database/password (SecureString)
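For reference, a sketch of creating those two parameters from the shell with the AWS CLI (the values are illustrative):
# plaintext parameter
aws ssm put-parameter --name "/myCoolApp/serverless/sls-myCoolLambdaFunction/dev/database/username" --type String --value "dbuser"
# SecureString parameter, encrypted with the default AWS SSM key
aws ssm put-parameter --name "/myCoolApp/serverless/sls-myCoolLambdaFunction/dev/database/password" --type SecureString --value "dbpass"
The IAM policy that grants access to them under this naming convention: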
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssm:DescribeParameters"
      ],
      "Resource": [
        "arn:aws:ssm:${region-or-wildcard}:${aws-account-id-or-wildcard}:*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ssm:GetParameter"
      ],
      "Resource": [
        "arn:aws:ssm:${region-or-wildcard}:${aws-account-id-or-wildcard}:parameter/myCoolApp/serverless/sls-*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "kms:Decrypt"
      ],
      "Resource": [
        "arn:aws:kms:*:${aws-account-id}:key/alias/aws/ssm"
      ]
    }
  ]
}
Then in my serverless.yml file, I might reference these two SSM values as function-level environment variables like so:
environment:
  DATABASE_USERNAME: ${ssm:/myCoolApp/serverless/sls-myCoolLambdaFunction/dev/database/username}
  DATABASE_PASSWORD: ${ssm:/myCoolApp/serverless/sls-myCoolLambdaFunction/dev/database/password~true}
Or, even better, if I want to be super dynamic for situations where I have different config values depending on the stage, I can set the environment variables like so
environment:
  DATABASE_USERNAME: ${ssm:/myCoolApp/serverless/sls-myCoolLambdaFunction/${self:provider.stage}/database/username}
  DATABASE_PASSWORD: ${ssm:/myCoolApp/serverless/sls-myCoolLambdaFunction/${self:provider.stage}/database/password~true}
With the above example, if I had two stages, dev & prod, perhaps I would create the following SSM parameters:
/myCoolApp/serverless/sls-myCoolLambdaFunction/dev/database/username (plaintext)
/myCoolApp/serverless/sls-myCoolLambdaFunction/dev/database/password (SecureString)
/myCoolApp/serverless/sls-myCoolLambdaFunction/prod/database/username (plaintext)
/myCoolApp/serverless/sls-myCoolLambdaFunction/prod/database/password (SecureString)
I suggest using the AWS SDK to get SSM parameters in code instead of saving them in an environment file (i.e. .env). It's more secure that way. You need to grant the role you use the ssm:GetParameter action, with the resource pointing to the parameter in the SSM Parameter Store. I use the Serverless framework for deployment. Below is what I have in serverless.yml, assuming parameter names follow the pattern "{stage}-myproject-*" (e.g. dev-myproject-username, qa-myproject-password):
custom:
  myStage: ${opt:stage}
provider:
  name: aws
  runtime: nodejs10.x
  stage: ${self:custom.myStage}
  region: us-east-1
  myAccountId: <aws account id>
  iamRoleStatements:
    - Effect: Allow
      Action:
        - ssm:GetParameter
      Resource: "arn:aws:ssm:${self:provider.region}:${self:provider.myAccountId}:parameter/${self:provider.stage}-myproject-*"
Two useful resources are listed below:
where to save credentials?
Serverless framework IAM doc
If you are using CodeBuild in a CI/CD pipeline, don't forget to add the SSM authorization policies to the CodeBuild service role, as sketched below. (When we talk about SSM we have to differentiate between Secrets Manager and Parameter Store.)
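As a sketch, attaching such an inline policy to the CodeBuild service role from the shell could look like this (role name, policy name and policy file are illustrative):
aws iam put-role-policy --role-name my-codebuild-service-role --policy-name ssm-parameter-read --policy-document file://ssm-parameter-read.json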

Private docker registry in cloud-init.yaml

I have a private docker registry. Usually I have to log in to the client machine and type docker login <private registry url>
I would like to deploy an AWS auto-scaling dockerized environment, but I am not sure how to ensure that docker commands pulling an image use our private docker registry only. I was thinking to set it up in cloud-init.yaml like this:
write_files:
  - path: /root/.docker/config.json
    content: |
      {
        "auths": {
          "<private registry url>": {
            "auth": "xxxxxxxxxxxxxxxxxx"
          }
        },
        "HttpHeaders": {
          "User-Agent": "Docker-Client/xyz (xyzzy)"
        }
      }
Is that the correct approach or is there a better way?
Use an S3 bucket to hold your JSON registry configuration and set it up on boot.
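A minimal sketch of that boot step, assuming the instance profile is allowed s3:GetObject on the object (bucket name and key are illustrative):
#!/bin/bash
# fetch the registry config from a private bucket at boot
mkdir -p /root/.docker
aws s3 cp s3://my-config-bucket/docker/config.json /root/.docker/config.json
chmod 600 /root/.docker/config.json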

AWS Ruby SDK credentials from file

I would like to store my credentials in ~/.aws/credentials and not in environment variables, but I am struggling.
To load the credentials I use (from here)
credentials = Aws::SharedCredentials.new({region: 'myregion', profile_name: 'myprofile'})
My ~/.aws/credentials is
[myprofile]
AWS_ACCESS_KEY = XXXXXXXXXXXXXXXXXXX
AWS_SECRET_KEY = YYYYYYYYYYYYYYYYYYYYYYYYYYY
My ~/.aws/config is
[myprofile]
output = json
region = myregion
I then define a resource with
aws = Aws::EC2::Resource.new(region: 'eu....', credentials: credentials)
but if I try for example
aws.instances.first
I get the error Error: #<Aws::Errors::MissingCredentialsError: unable to sign request without credentials set>
Everything works if I hard code the keys
According to the source code aws loads credentials automatically only from ENV.
You can create credentials with custom attributes.
credentials = Aws::Credentials.new(AWS_ACCESS_KEY, AWS_SECRET_KEY)
aws = Aws::EC2::Resource.new(region: 'eu-central-1', credentials: credentials)
In your specific case, it seems there is no way to pass custom credentials to SharedCredentials.
If you just do
credentials = Aws::SharedCredentials.new()
it loads the default profile. You should be able to load myprofile by passing in :profile_name as an option.
I don't know if you can also override the region though. You might want to try dropping that option and see how it works.
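One more thing worth checking: the shared credentials file conventionally uses lower-case key names. The SDK looks for aws_access_key_id and aws_secret_access_key, so the AWS_ACCESS_KEY / AWS_SECRET_KEY names shown in the question will not be picked up. The conventional form is:
[myprofile]
aws_access_key_id = XXXXXXXXXXXXXXXXXXX
aws_secret_access_key = YYYYYYYYYYYYYYYYYYYYYYYYYYY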

Maintaining OAuth keys between Elastic Beanstalk deployments

I have a Laravel application running in an AWS Elastic Beanstalk environment. I use Laravel Passport to handle the authentication.
Every time I run eb deploy the keys are deleted, since they are not part of the version-controlled files (they are in .gitignore). Thus, I have to manually run php artisan passport:keys on the EC2 instance to generate the keys. But this makes all users need to log in again, because the old tokens are now invalid, since it's a new key pair.
What is the best practice to provide a consistent oauth-public and oauth-private key for my configuration?
I am thinking of including the keys in the repository, but I believe this is not recommended.
Another way is to generate the keys once, upload them to S3, and then have a post-deployment script retrieve them from S3.
Is there any better way?
I managed to solve this yesterday, with S3.
Create a private S3 bucket, where you store your sensitive files (oauth-private.key, etc.).
In your .ebextensions directory, you have to create a .config file where you define a Resource (see https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#linux-files, authentication section), looking like this:
Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Auth:
          type: "s3"
          buckets: ["<BUCKET-NAME>"]
          roleName:
            "Fn::GetOptionSetting":
              Namespace: "aws:autoscaling:launchconfiguration"
              OptionName: "IamInstanceProfile"
              DefaultValue: "aws-elasticbeanstalk-ec2-role"
This assumes that A) your S3 bucket is called <BUCKET-NAME> and B) the IAM instance profile in your Elastic Beanstalk environment is called aws-elasticbeanstalk-ec2-role.
Now you have to add the files to a location on the instance where you can access them; you're free to choose where. In your .config file insert the following:
files:
  "/etc/keys/oauth-private.key":
    mode: "000755"
    owner: webapp
    group: webapp
    authentication: "S3Auth" # Notice, this is the same name as specified in the Resources section
    source: "https://<BUCKET-NAME>.s3-<REGION>.amazonaws.com/<PATH-TO-THE-FILE-IN-THE-BUCKET>"
Now for this to work, you still need to grant the IAM instance profile (aws-elasticbeanstalk-ec2-role) access to the bucket, so you need to attach a bucket policy like this:
{
  "Version": "2012-10-17",
  "Id": "BeanstalkS3Copy",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<ID>:role/aws-elasticbeanstalk-ec2-role"
      },
      "Action": [
        "s3:ListBucketVersions",
        "s3:ListBucket",
        "s3:GetObjectVersion",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::<BUCKET-NAME>/*"
      ]
    }
  ]
}
You can find the ARN of the IAM instance profile by going to the IAM Dashboard > Roles > aws-elasticbeanstalk-ec2-role and copying the Role ARN.
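If you prefer the CLI, a sketch of fetching the same ARN (assuming the default role name):
aws iam get-role --role-name aws-elasticbeanstalk-ec2-role --query Role.Arn --output text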
In your Laravel application you then have to call Passport::loadKeysFrom('/etc/keys') (typically in the boot method of your AuthServiceProvider) so Passport reads the keys from that location.

How to set Twitter app consumer credentials in Amazon Cognito via CLI

Trying this out from the CLI:
aws cognito-identity update-identity-pool \
  --identity-pool-id MyIdentityPoolId \
  --identity-pool-name MyIdentityPoolName \
  --allow-unauthenticated-identities \
  --supported-login-providers graph.facebook.com=MyFacebookAppId,api.twitter.com=MyTwitterConsumerKey;MyTwitterConsumerSecret \
  --region $MyRegion
The CLI response says:
{
  "SupportedLoginProviders": {
    "api.twitter.com": "MyTwitterConsumerKey",
    "graph.facebook.com": "MyFacebookAppId"
  },
  "AllowUnauthenticatedIdentities": true,
  "IdentityPoolName": "MyIdentityPoolName",
  "IdentityPoolId": "MyIdentityPoolId"
}
MyTwitterConsumerSecret: command not found
Unlike Facebook, which requires only one credential (the FacebookAppId), Twitter requires two credentials (the ConsumerKey and the ConsumerSecret).
If I delimit the two credentials with a semicolon, it looks like only the first part gets set in the Twitter configuration for Amazon Cognito.
What is the format to pass BOTH ConsumerKey and ConsumerSecret for configuring twitter?
I referred to these AWS docs:
Update Identity Pool via CLI
Create Identity Pool via CLI
Configuring Twitter/Digits with Amazon Cognito
OK, how silly. I simply needed to wrap the credentials for --supported-login-providers in double quotes: without them the shell treats the semicolon as a command separator, which is why it reported MyTwitterConsumerSecret: command not found.
--supported-login-providers graph.facebook.com="MyFacebookAppId",api.twitter.com="MyTwitterConsumerKey;MyTwitterConsumerSecret"
Then it worked.
{
  "SupportedLoginProviders": {
    "api.twitter.com": "MyTwitterConsumerKey;MyTwitterConsumerSecret",
    "graph.facebook.com": "MyFacebookAppId"
  },
  "AllowUnauthenticatedIdentities": true,
  "IdentityPoolName": "MyIdentityPoolName",
  "IdentityPoolId": "MyIdentityPoolId"
}
