I am a newbie trying to get a Docker image into an AWS container registry. According to the AWS documentation, I enter credentials into the AWS CLI and then issue the command aws ecr get-login.
This results in the following:
C:\Users\xxx\Desktop>aws ecr get-login --region us-east-1
An error occurred (AccessDeniedException) when calling the GetAuthorizationToken operation:
User: arn:aws:iam::847077264418:user/xxx
is not authorized to perform: ecr:GetAuthorizationToken on resource: *
Clearly this is an AWS IAM permissions issue. How do I fix it?
By default, IAM users don't have permission to create or modify Amazon
ECR resources, or perform tasks using the Amazon ECR API. (This means
that they also can't do so using the Amazon ECR console or the AWS
CLI.) To allow IAM users to create or modify resources and perform
tasks, you must create IAM policies that grant IAM users permission to
use the specific resources and API operations they'll need, and then
attach those policies to the IAM users or groups that require those
permissions.
from http://docs.aws.amazon.com/AmazonECR/latest/userguide/ECR_IAM_policies.html
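If you have IAM admin access, one quick fix is to attach the AWS-managed AmazonEC2ContainerRegistryPowerUser policy, which includes ecr:GetAuthorizationToken. A minimal sketch, assuming the user name xxx from the error message:
# Attach the AWS-managed ECR policy to the IAM user from the error message.
aws iam attach-user-policy \
    --user-name xxx \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPowerUser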
I need to execute a Jenkins pipeline in Docker as an agent.
The Docker image is located in AWS ECR.
How can I authenticate against AWS ECR to pull the image for the agent?
agent {
    docker {
        alwaysPull true
        image '<aws-account-Id>.dkr.ecr.us-west-2.amazonaws.com/<ecr-repo>:<tag>'
        registryUrl 'https://<aws-account-Id>.dkr.ecr.us-west-2.amazonaws.com'
        registryCredentialsId 'ecr:us-west-2:<Jenkins Credential ID>'
    }
}
To use an image from an AWS ECR repo as an agent in Jenkins, you first need to add credentials of kind AWS Credentials. Then use the code above in the agent block of your pipeline.
Make sure to replace
<aws-account-Id> with your AWS account ID.
<ecr-repo> with the ECR repo name.
<tag> with the ECR image tag you want to use.
<Jenkins Credential ID> with the Jenkins credentials ID you got when you saved the credentials in Jenkins.
us-west-2 with your ECR repo's region.
You can use https://<jenkins.url>/directive-generator/ to get this code generated for you.
You can try this:
agent {
    docker {
        label "buildDockerNode"
        image "nodejs10-test-v1"
        alwaysPull true
        registryUrl "*aws_account_id*.dkr.ecr.us-west-2.amazonaws.com/*project*"
        registryCredentialsId "ecr:us-west-2:*cred_id*"
    }
}
According to this page https://aws.amazon.com/blogs/compute/authenticating-amazon-ecr-repositories-for-docker-cli-with-credential-helper/ something like the following should work:
sh """#!/bin/bash
docker login -u=${USER} -p=${PASS} https://aws_account_id.dkr.ecr.us-east-1.amazonaws.com
"""
This means you need an authorization token before pulling the image from ECR, which in turn means you also need to install the AWS CLI on the Jenkins server. The best way is to assign a role and run the command below in your pipeline to get the authorization token; if that is too complicated, use the ECR plugin below.
Before it can push and pull images, the Docker client must authenticate to Amazon ECR registries as an AWS user. The AWS CLI get-login command provides you with authentication credentials to pass to Docker. For more information, see Registry Authentication.
Use the Amazon ECR plugin (JENKINS/Amazon+ECR).
Note: the plugin creates the token automatically based on the AWS registry; alternatively, you can run this command in your Jenkinsfile before the pull:
$(aws ecr get-login --no-include-email --region us-west-2)
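A sketch of running that from a Jenkinsfile step before the pull, assuming AWS CLI v1 on the node and the same placeholders as in the first answer:
sh '''
# Evaluate the docker login command printed by get-login (AWS CLI v1 only).
eval $(aws ecr get-login --no-include-email --region us-west-2)
docker pull <aws-account-Id>.dkr.ecr.us-west-2.amazonaws.com/<ecr-repo>:<tag>
'''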
And to execute a Jenkins pipeline in Docker as an agent, refer to this link.
When starting EC2 instances via the AWS CLI, I can specify a KmsKeyId for block devices.
When starting an EC2 instance via CloudFormation (either directly or via an ASG/LaunchConfiguration), this option does not exist.
How can I encrypt the block devices of my EC2 instances started via CloudFormation with a specific KMS key?
It looks like the chain is:
Instance > [ BlockDeviceMapping ] > Ebs > KmsKeyId
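A minimal template sketch of that chain; the AMI ID, instance type, device name, volume size, and KMS key ARN below are all placeholders:
Resources:
  MyInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-xxxxxxxx
      InstanceType: t2.micro
      BlockDeviceMappings:
        - DeviceName: /dev/xvda
          Ebs:
            Encrypted: true
            # Instance > BlockDeviceMappings > Ebs > KmsKeyId, as described above
            KmsKeyId: arn:aws:kms:us-east-1:111122223333:key/your-key-id
            VolumeSize: 20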
When I SSH into an AWS EC2 Linux instance, the user is ec2-user by default. I then need to set AWS credentials by writing to ~/.aws/credentials, but I get permission denied. I suspect that if I use sudo, the credentials file will be owned by the root user, and as a result my API server can't read from it.
What's the correct approach to set up AWS credentials there?
The 'correct' way to set up the credentials is to assign a role to the EC2 instance when you create it (or assign one after you create it). That role can be created and assigned to the EC2 instance via the AWS console - there is no need to ssh in and create the credentials there.
See: Easily Replace or Attach an IAM Role to an Existing EC2 Instance by Using the EC2 Console | AWS Security Blog
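If you prefer the CLI over the console, a sketch of attaching a role after launch; the instance ID and instance profile name below are placeholders:
# Attach an existing instance profile (the wrapper around an IAM role) to a running instance.
aws ec2 associate-iam-instance-profile \
    --instance-id i-0123456789abcdef0 \
    --iam-instance-profile Name=my-api-server-profile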
You can create the credentials file locally, then upload it to your EC2 instance.
Create the credentials file locally:
$ vim credentials
Upload it to your EC2 instance:
$ scp /path/credentials username@servername:/path
Our VPCs are not directly connected to the internet, so we need the CLI --endpoint-url option to send commands to the custom VPC endpoints instead of the standard AWS service endpoints, e.g.
aws sns publish --message $MESSAGE --target-arn $SNSTARGET --region $REGION --endpoint-url 'https://vpce-xxxx-xxxxx.sns.ap-southeast-1.vpce.amazonaws.com/'
For Auto Scaling, though, I can't find any VPC interface endpoint option, and the EC2 endpoint is not accepted.
aws autoscaling complete-lifecycle-action --lifecycle-hook-name $LIFECYCLEHOOKNAME --auto-scaling-group-name $ASGNAME --lifecycle-action-result $HOOKRESULT --instance-id $INSTANCEID --region $REGION
Could not connect to the endpoint URL: https://autoscaling.ap-southeast-1.amazonaws.com/
If I try to use the closest endpoint, i.e. EC2:
aws autoscaling complete-lifecycle-action --lifecycle-hook-name $LIFECYCLEHOOKNAME --auto-scaling-group-name $ASGNAME --lifecycle-action-result $HOOKRESULT --instance-id $INSTANCEID --region $REGION --endpoint-url 'https://vpce-xxxx-xxx.ec2.ap-southeast-1.vpce.amazonaws.com/'
An error occurred (InvalidAction) when calling the CompleteLifecycleAction operation: The action CompleteLifecycleAction is not valid for this web service.
AWS will be adding an EC2 Auto Scaling VPC endpoint in the coming weeks; the rumor is before re:Invent.
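Until then, you can check whether the endpoint has appeared in your region; a sketch using the ap-southeast-1 region from the question:
# List services that offer interface VPC endpoints in this region;
# watch for com.amazonaws.ap-southeast-1.autoscaling to show up.
aws ec2 describe-vpc-endpoint-services --region ap-southeast-1 --query 'ServiceNames'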
I am going to automate AWS EC2 instance creation. I have a YAML file built using a CloudFormation template. I want to know how to run this using the command-line interface.
First you have to upload your template to S3.
create bucket
aws s3api create-bucket --bucket cloud-formation-stacks --region us-east-1
upload to S3 (s3 sync works on directories; for a single template file use cp)
aws s3 cp <template> s3://cloud-formation-stacks/
create stack
aws cloudformation create-stack --stack-name mystack \
    --template-url <template url> \
    --parameters ParameterKey=KeyName,ParameterValue=YOUR_KEY_NAME
Add your parameters as shown (VPC, security group, subnet ID, tags, etc.).
Or you can do this via the AWS Management Console: Services -> CloudFormation, then upload your template.
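Note that for a single small template you can also skip the S3 upload and pass the file directly; a sketch, assuming your template is named template.yaml in the current directory:
# Create the stack straight from a local template file (no S3 bucket needed).
aws cloudformation create-stack --stack-name mystack \
    --template-body file://template.yaml \
    --parameters ParameterKey=KeyName,ParameterValue=YOUR_KEY_NAME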