Setting up credentials for Docker and AWS on Windows

I'm attempting to set up a docker-machine on AWS from my computer, and I want to use the ~/.aws/credentials file to connect and get going. I'm struggling to sort this out, though. Can I check the structure of the credentials file?
I'm expecting to include the following text:
[default]
aws_access_key_id = key-pair-name-from-ec2-key-pair-list
aws_secret_access_key = <this is the bit I'm struggling with>
For the aws_secret_access_key, do I need to include the contents of the .pem file that was downloaded when I created the key pair? If so, do I include the BEGIN/END comment lines, and do I need to strip out the newlines?
I have tried stripping out the newlines and the comment lines, but that didn't work. I have also tried including the file exactly as is, and again that didn't work. I've also tried the other option of preserving the newlines but removing the comment lines, and again that didn't work.
Am I using the right secret here, or is there something else I should be doing? Is [default] the correct thing to use, or do I need to use [username]?

Key pairs are used only to connect to EC2 instances. To use the AWS APIs with the CLI or any SDK, you have to obtain an access key and secret. You can follow these steps to obtain them for your IAM user: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey
The best practice is to create a new user with only the access rights it needs and create a key for that user. And never expose AWS credentials in public.
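For reference, a minimal sketch of what ~/.aws/credentials should look like once you have generated an IAM access key (the values below are AWS's documented placeholders, not real credentials):
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFi/K7MDENG/bPxRfiCYEXAMPLEKEY
You can also let the CLI write this file for you by running "aws configure" and pasting in the two values when prompted.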

Related

Ansible Tower (AWX) - using secure variables in a playbook?

Greetings everyone,
I've recently started messing with Ansible (in particular Ansible Tower).
I ran into an issue using secure values in my playbook; more accurately, I didn't understand how to use them correctly.
For comparison, in Chef Infra you can use data_bags to store your secure credentials.
You create a data bag:
knife data bag create testDataBag
You would create a json file for a data bag item:
{
  "id": "preproduction",
  "user": "user1",
  "password": "this-is-a-password"
}
Upload it to the Chef server while encrypting it with a secret file (which exists on the target server):
knife data bag from file testDataBag .\testDataBag\preproduction.json --secret-file .\secret-file
and then you can use it in your cookbook:
userinfo = data_bag_item('testDataBag', 'preproduction')
userinfo['user']     # "user1"
userinfo['password'] # "this-is-a-password"
An example use case - configuring the password for a Linux user.
userinfo = data_bag_item('testDataBag', 'preproduction')
user userinfo['user'] do
  comment 'A random user'
  home "/home/#{userinfo['user']}"
  shell '/bin/bash'
  password userinfo['password']
end
I know this is a lot of information, but I just wanted to show how I'm used to handling secure credentials.
Back to Ansible: I understand there is an ansible-vault tool which I can use to encrypt a variable file that can later be used in a playbook.
Sadly, the only examples I've seen (or maybe I just didn't notice others) involve running playbooks from the command line, which is not something I do.
I have a playbook in my Git repository which is connected to a project in my Ansible Tower.
What do I need to do to get to the point where I can use a variable that contains the password?
Is encryption done the same way, using ansible-vault?
Where do I store the encrypted files? (Specifically in Ansible Tower)
How do I store the vault passwords (the ones used to decrypt a vault-id)?
How do I access them in my playbook?
I've looked into these links but couldn't find anything helpful:
https://docs.ansible.com/ansible/latest/user_guide/vault.html
https://docs.ansible.com/ansible/latest/user_guide/playbooks_vault.html
https://docs.ansible.com/ansible/latest/user_guide/playbooks_best_practices.html#variables-and-vaults
And the Ansible Tower documentation has no explanation of how and where to store your vault IDs.
If any more information is needed, please tell me and I'll update my post.
Thanks everyone!
As far as I know you have two options to achieve this in AWX/Tower, depending on where you want those secrets stored.
1. Create a vault within your project/Git repo (see the sketch after these steps):
- Use the "ansible-vault create" command and choose a password.
- Save the credentials within the vault in YAML format and commit/push the changes to Git.
- In your playbook, add an include_vars for your vault file and commit/push to Git.
- In Tower, create a credential, select type=Vault, and add the password for your vault.
- On your Tower template, attach the credential you created earlier.
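A minimal sketch of that flow, assuming a vars file named vault.yml (the file and variable names are just placeholders):
ansible-vault create vault.yml
Inside the editor that opens, store the secrets as ordinary YAML, e.g. db_password: this-is-a-password. Then load the file from the playbook:
- name: Load vaulted secrets
  include_vars: vault.yml
When the Tower job runs, the Vault credential attached to the template supplies the password, so Tower can decrypt vault.yml on the fly.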
2. Use a custom credential type (this will not save the creds in Git at all; they will live only on Tower/AWX):
- Create a new custom credential type with an injector configuration of type "extra_vars" and the credentials you want to include as variables in your playbook.
- Then create a credential based on the new credential type you created in the previous step.
- Now assign that credential to your template, and those variables will be available in your playbook run.
Here are the details on how to create a custom credential type:
https://docs.ansible.com/ansible-tower/latest/html/userguide/credential_types.html
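For the second option, a hedged sketch of what the custom credential type definition might look like (the my_secret field name is made up for illustration). Input configuration:
fields:
  - id: my_secret
    type: string
    label: My Secret
    secret: true
Injector configuration:
extra_vars:
  my_secret: '{{ my_secret }}'
Any template that attaches a credential of this type then sees my_secret as a regular variable in the playbook run.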

Passing secrets to a Lambda during the deployment stage (CodePipeline) with Serverless?

I have a CodePipeline set up with GitHub as a source. I am trying, without success, to pass a single secret parameter (in this case a Stripe secret key, currently defined in an .env file; explanation down below) to a specific Lambda during the Deployment stage of the CodePipeline execution.
Deployment stage in my case is basically a CodeBuild project that runs the deployment.sh script:
#! /bin/bash
npm install -g serverless@1.60.4
serverless deploy --stage $env -v -r eu-central-1
Explanation:
I've tried doing this with serverless-dotenv-plugin, which serves the purpose when the deployment is done locally, but when it's done through CodePipeline it returns an error on the Lambda's execution, for the following reason:
Since CodePipeline's source is set to GitHub (the .env file is not committed), whenever a change is committed to the repository, CodePipeline's execution is triggered. By the time it reaches the deployment stage, all node modules are installed (serverless-dotenv-plugin among them). When the serverless deploy --stage $env -v -r eu-central-1 command executes, serverless-dotenv-plugin searches for the .env file in which my secret is stored and won't find it, since there is no .env file outside the "local" scope. When the Lambda requiring this secret is triggered, it throws an error.
So my question is, is it possible to do it with dotenv/serverless-dotenv-plugin, or should that approach be discarded? Should I maybe use SSM Parameter Store or Secrets Manager? If yes, could someone explain how? :)
So, upon further investigation of this topic I think I have the solution.
SSM Parameter Store vs Secrets Manager is an entirely different topic, but for my purpose SSM Parameter Store is the option I chose for this problem. Basically it can be done in two ways.
1. Use AWS Parameter Store
Simply add a secret in your AWS Parameter Store console, then reference the value in your serverless.yml as a Lambda environment variable. The Serverless Framework is able to fetch the value from Parameter Store on deploy.
provider:
  environment:
    stripeSecretKey: ${ssm:stripeSecretKey}
Finally, you can reference it in your code just as before:
const stripe = Stripe(process.env.stripeSecretKey);
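As a side note, a hedged sketch of creating that parameter from the CLI in the first place (the name and region simply match this example; the value is a placeholder):
aws ssm put-parameter --name stripeSecretKey --type SecureString --value "sk_test_..." --region eu-central-1
The SecureString type tells Parameter Store to encrypt the value with KMS before storing it.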
PROS: This can be used along with a local .env file for both local and remote usage while keeping your Lambda code the same, i.e. process.env.stripeSecretKey
CONS: Since the secrets are decrypted and then set as Lambda environment variables on deploy, if you go to your Lambda console you'll be able to see the secret values in plain text (which is a real security concern).
That brings me to the second way of doing this, which I find more secure and which I ultimately choose:
2. Store in AWS Parameter Store, and decrypt at runtime
To avoid exposing the secrets in plain text in your AWS Lambda Console, you can decrypt them at runtime instead. Here is how:
Add the secrets in your AWS Parameter Store Console just as in the above step.
Change your Lambda code to call the Parameter Store directly and decrypt the value at runtime:
import stripePackage from 'stripe';
const aws = require('aws-sdk');
const ssm = new aws.SSM();
// getParameter(...).promise() returns a Promise; await it before using the value
const stripeSecretKey = await ssm.getParameter(
  {Name: 'stripeSecretKey', WithDecryption: true}
).promise();
const stripe = stripePackage(stripeSecretKey.Parameter.Value);
(Small tip: if your Lambda is defined as an async function, make sure to use the await keyword before ssm.getParameter(...).promise(), as shown above.)
PROS: Your secrets are not exposed in plain text at any point.
CONS: Your Lambda code does get a bit more complicated, and there is an added latency since it needs to fetch the value from the store. (But considering it's only one parameter and it's free, it's a good trade-off I guess)
In conclusion, I just want to mention that for all of this to work you will need to tweak your Lambda's IAM policy so it can access Systems Manager and the secret stored in Parameter Store; any permission errors are easy to inspect through CloudWatch.
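With the Serverless Framework, that policy tweak might look something like this in serverless.yml (the region, account wildcard, and parameter name are assumptions to adapt):
provider:
  iamRoleStatements:
    - Effect: Allow
      Action:
        - ssm:GetParameter
      Resource: arn:aws:ssm:eu-central-1:*:parameter/stripeSecretKey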
Hopefully this helps someone out, happy coding :)

How can I start an instance from the new EC2 CLI?

I have an ec2 micro instance. I can start it from the console, ssh into it (using a .pem file) and visit the website it hosts.
Using the old ec2 CLI, I can start the instance and perform other actions including ssh and website access.
I am having trouble with the new ec2 CLI. When I do "aws ec2 start-instances --instance-ids i-xxx" I get the message "A client error (InvalidInstanceID.NotFound) occurred when calling the StartInstances operation: The instance ID 'i-xxx' does not exist".
Clearly the instance exists, so I don't know what the message is really indicating.
Here is some of my thinking:
One difference between the old and new CLI is that the former used .pem files, whereas the new interface uses access keys. The instance has a key pair associated with it, but I have tried all the credentials I can find and none of them change anything.
I tried creating an IAM user and a new access key pair for it. The behavior in all cases is unchanged: everything works (starting from the console or the old CLI, web access, ssh) except the new CLI.
I realize that there is a means of replacing the key pair by detaching the volume (as described here), but the process looks a little scary to me.
I realize that I can clone another instance from the image, but the image is a little out of date, and I don't want to lose my changes.
Can anyone suggest what the message really means and what I can do to get around the problem?
The problem had to do with credentials. I had the correct environment variables (AWS_ACCESS_KEY and AWS_SECRET_KEY) set, but they didn't match what was in my .aws/credentials file. That is, despite what it says here, the new CLI worked only when I had the environment variables and the credentials file correct and in sync.
Configure your AWS CLI with "aws configure" on the new CLI instance, setting the region in which your EC2 instance resides, and then try the same command again. The instance should start.
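In other words, the fix boils down to making sure the CLI targets the region the instance lives in, either via configuration or per command (the region below is a placeholder):
aws configure                                        # set access key, secret key, and default region
aws ec2 start-instances --instance-ids i-xxx --region us-east-1
aws ec2 describe-instances --region us-east-1        # sanity check: the instance should appear here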

Does Ansible Vault have to use a password to run

I have been looking into Ansible Vault but want to check something in case I have missed a crucial point.
Do you have to provide the password when running the playbook? Encrypting the data seems a great idea, but if I share the playbook, the person running it will require the password. If they have the password then they can decrypt the file and see the data.
I would like to use it to set passwords in files but would like non-admins to be able to run the playbook.
Have I missed something? I am struggling to see its worth if this is the case.
Thanks
The purpose of the vault is to keep secrets encrypted "at rest" (e.g., in your source control repo, or on disk), so that someone can't learn the secrets by getting hold of the content. As others have mentioned, if you want to delegate use of the secrets without divulging them, you'll need an intermediary like Tower.
In your case you need something that brokers Ansible execution, because, as you've said, encryption would be useless if you also share the password.
As mentioned in the comments, you can use Ansible Tower, or you can try setting up a simple HTTP endpoint that triggers Ansible based on specified parameters.
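To make the trade-off concrete, a minimal sketch of the plain vault workflow (the file names are placeholders):
ansible-vault encrypt vars/secrets.yml       # encrypts the file at rest; prompts for a password
ansible-playbook site.yml --ask-vault-pass   # whoever runs this must supply the same password
The second command is exactly the limitation you describe: the runner needs the password, which is why a broker like Tower is the usual answer.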

Web access to an Amazon EC2 instance command line

I lost ssh access to my Amazon EC2 instance and I need to access it NOW due to a problem with my service. I was told that there is a way to access the command line via the web with a Java applet, but I haven't been able to find it.
Is there a way to access the command line without the .pem file? Terminating/rebooting the instance is not feasible.
AFAIK it is not possible: Amazon does not retain private keys, and once your instance has been assigned a key pair, it cannot be reassigned.
You could try to create a new instance with a separate key pair and ssh locally between them, but I don't imagine that is possible.
If it's an EBS-based instance and you are able to stop it, you could mount the EBS volume on a new instance and copy a new key over; however, based on what you said, I don't believe that's an option. You may need to contact Amazon, but even then, there might not be anything that can be done.
Edit: in the same vein as the second point, if you have other user accounts with valid login shells, and you have sudo access on one of those accounts, you can do the same as mentioned in the last bit: generate a new key pair and append the new public key to ~/.ssh/authorized_keys.
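A rough sketch of that EBS rescue path, assuming an Amazon Linux instance (the device name, mount point, and ec2-user account are assumptions that vary by AMI):
# stop the broken instance, detach its root volume, and attach it to a rescue instance as /dev/xvdf, then:
sudo mkdir -p /mnt/rescue
sudo mount /dev/xvdf1 /mnt/rescue
ssh-keygen -t rsa -f new-key                 # generate a replacement key pair
cat new-key.pub | sudo tee -a /mnt/rescue/home/ec2-user/.ssh/authorized_keys
sudo umount /mnt/rescue
# reattach the volume to the original instance, start it, and log in with: ssh -i new-key ec2-user@<public-dns>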
