Jenkins pipeline Docker agent from AWS ECR

I need to execute a Jenkins pipeline in Docker as an agent. The Docker image is located in AWS ECR. How can I authenticate against AWS ECR to pull the image for the agent?

agent {
    docker {
        alwaysPull true
        image '<aws-account-Id>.dkr.ecr.us-west-2.amazonaws.com/<ecr-repo>:<tag>'
        registryUrl 'https://<aws-account-Id>.dkr.ecr.us-west-2.amazonaws.com'
        registryCredentialsId 'ecr:us-west-2:<Jenkins Credential ID>'
    }
}
To use an image from an AWS ECR repo as an agent in Jenkins, you first need to add credentials of kind "AWS Credentials". Then use the code above in the agent block of your pipeline. Make sure to replace:
<aws-account-Id> with your AWS account ID.
<ecr-repo> with the ECR repo name.
<tag> with the ECR image tag you want to use.
<Jenkins Credential ID> with the Jenkins credentials ID you got when you saved the credentials in Jenkins.
us-west-2 with your ECR repo's region.
You can use https://<jenkins.url>/directive-generator/ to get this code generated for you.
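As a sanity check outside Jenkins, you can verify on the node that the credentials can actually pull the image. A minimal sketch using the AWS CLI v2 login flow (same placeholders as above):
# Log in to the ECR registry and pull the agent image manually.
aws ecr get-login-password --region us-west-2 \
  | docker login --username AWS --password-stdin <aws-account-Id>.dkr.ecr.us-west-2.amazonaws.com
docker pull <aws-account-Id>.dkr.ecr.us-west-2.amazonaws.com/<ecr-repo>:<tag>
If this works but the pipeline still fails, the problem is in the Jenkins credential binding rather than in ECR itself.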

You can try this:
agent {
    docker {
        label "buildDockerNode"
        image "nodejs10-test-v1"
        alwaysPull true
        registryUrl "*aws_account_id*.dkr.ecr.us-west-2.amazonaws.com/*project*"
        registryCredentialsId "ecr:us-west-2:*cred_id*"
    }
}

According to this page, https://aws.amazon.com/blogs/compute/authenticating-amazon-ecr-repositories-for-docker-cli-with-credential-helper/, something like the following should work:
sh """#!/bin/bash
docker login -u=${USER} -p=${PASS} https://aws_account_id.dkr.ecr.us-east-1.amazonaws.com
"""

This means you need an authorization token before pulling the image from ECR, which in turn means you also need the AWS CLI installed on the Jenkins server. The best way is to assign an IAM role and run the command below in your pipeline to get the authorization token; if that is too complicated, use the ECR plugin described below.
Before it can push and pull images, the Docker client must authenticate to Amazon ECR registries as an AWS user. The AWS CLI get-login command provides you with authentication credentials to pass to Docker. For more information, see Registry Authentication.
Use the Amazon ECR plugin (JENKINS/Amazon+ECR), which creates the token automatically for the AWS registry. Alternatively, you can run this command in your Jenkinsfile before the pull:
$(aws ecr get-login --no-include-email --region us-west-2)
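Note that get-login was removed in AWS CLI v2; the equivalent there is get-login-password piped into docker login:
aws ecr get-login-password --region us-west-2 \
  | docker login --username AWS --password-stdin aws_account_id.dkr.ecr.us-west-2.amazonaws.com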
And for how to execute a Jenkins pipeline in Docker as an agent, refer to this link.

Related

Encrypt using ansible-vault with packer

I have a Jenkins pipeline which does:
jenkins --> packer --> ansible configs --> create AWS AMI
In the Ansible var files, I have an Artifactory API key. When I create the Jenkins pipeline, how can I encrypt this?
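A sketch of the usual ansible-vault pattern for this (the vault file path and the VAULT_PASS variable are illustrative assumptions, not from this thread):
# Encrypt the var file that holds the Artifactory API key.
ansible-vault encrypt group_vars/all/vault.yml
# In the Jenkins pipeline, store the vault password as a Jenkins secret,
# expose it as $VAULT_PASS, and pass it to the playbook run:
ansible-playbook site.yml --vault-password-file <(printf '%s' "$VAULT_PASS")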

Automate AWS EC2 creation using yaml and cloudformation

I am going to automate AWS EC2 instance creation. I have a YAML file built using a CloudFormation template. I want to know how to run this using the command line interface.
First you have to upload your template to S3.
Create a bucket:
aws s3api create-bucket --bucket cloud-formation-stacks --region us-east-1
Upload the template to S3:
aws s3 sync --delete <template> s3://cloud-formation-stacks
Create the stack:
aws cloudformation create-stack --stack-name mystack \
    --template-url <template url> \
    --parameters ParameterKey=KeyName,ParameterValue=YOUR_KEY_NAME
Add your parameters as shown (VPC, security group, subnet ID, tags, etc.).
Or you can do this via the AWS Management Console: Services -> CloudFormation, then upload your template.
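If you script this, it often helps to block until the stack is actually created and then check its status (standard AWS CLI commands):
# Wait for creation to finish, then print the final status.
aws cloudformation wait stack-create-complete --stack-name mystack
aws cloudformation describe-stacks --stack-name mystack \
    --query 'Stacks[0].StackStatus' --output text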

How to connect to kubernetes cluster locally and open dashboard?

I have a new laptop and a Kubernetes cluster running on Google Cloud Platform. How can I access that cluster from my local machine to execute kubectl commands, open the dashboard, etc.?
That is not clearly stated in the documentation.
From your local workstation, you need to have the gcloud tool installed and properly configured to connect to the correct GCP account. Then you can run:
gcloud container clusters get-credentials [CLUSTER_NAME]
This will set up kubectl to connect to your Kubernetes cluster.
Of course you'll need to install kubectl either using gcloud with:
gcloud components install kubectl
Or using specific instructions for your operating system.
Please check the following link for more details: https://cloud.google.com/kubernetes-engine/docs/quickstart
Once you have kubectl access you can deploy and access the kubernetes dashboard as described here: https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
The first thing you would need to do once you've installed Cloud SDK is ensure it is authenticated to your Google Cloud Platform account/project. To do this you need to run:
gcloud auth login
And then follow the on screen instructions.
Also, you will need to install kubectl to access/control aspects of your cluster:
gcloud components install kubectl
You can also install it through native package management by following the instructions here.
Once your gcloud is authenticated to your project you can run this to ensure kubectl is pointing at your cluster and authenticated:
gcloud container clusters get-credentials CLUSTER_NAME --zone ZONE
You'll now be able to issue commands with kubectl that target the cluster you defined in the previous step.
You can access the dashboard following the instructions here.
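For the dashboard specifically, the usual flow once kubectl works is to proxy the API server and open the dashboard URL locally. A sketch, assuming the standard kubernetes-dashboard deployment (the namespace and service name vary between dashboard versions):
# Start a local proxy to the cluster's API server.
kubectl proxy &
# Then open this URL in a browser:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/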

Container credentials access denied exception

I am a newbie trying to get a Docker image into an AWS container registry. According to the AWS documentation, I enter credentials into the AWS CLI and then issue the command aws ecr get-login.
This results in the following:
C:\Users\xxx\Desktop>aws ecr get-login --region us-east-1
An error occurred (AccessDeniedException) when calling the GetAuthorizationToken operation:
User: arn:aws:iam::847077264418:user/xxx
is not authorized to perform: ecr:GetAuthorizationToken on resource: *
Clearly this is something in the AWS IAM. How do I fix it?
By default, IAM users don't have permission to create or modify Amazon ECR resources, or perform tasks using the Amazon ECR API. (This means that they also can't do so using the Amazon ECR console or the AWS CLI.) To allow IAM users to create or modify resources and perform tasks, you must create IAM policies that grant IAM users permission to use the specific resources and API operations they'll need, and then attach those policies to the IAM users or groups that require those permissions.
from http://docs.aws.amazon.com/AmazonECR/latest/userguide/ECR_IAM_policies.html
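In practice, the quickest fix is usually to attach one of the AWS-managed ECR policies to the user. For example, since the goal here is pushing images (the user name xxx mirrors the redacted one in the error message):
# Grant push/pull ECR access, which includes ecr:GetAuthorizationToken.
aws iam attach-user-policy \
    --user-name xxx \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPowerUser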

AWS Configure Bash One Liner

Can anybody tell me how to automate aws configure in bash with a one-liner?
Example:
$ aws configure --profile user2
AWS Access Key ID [None]: AKIAI44QH8DHBEXAMPLE
AWS Secret Access Key [None]: je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: text
Application: I want to automate this inside a Docker Entrypoint!
If you run aws configure set help you will see that you can supply settings individually on the command line and they will be written to the relevant credentials or config file. For example:
aws configure set aws_access_key_id AKIAI44QH8DHBEXAMPLE
You can also run this interactively to modify the default credentials:
aws configure
Or run it interactively to create/modify a named profile:
aws configure --profile qa
Note: with the first technique above, whatever command you type will appear in your history and this is not a good thing for passwords, secret keys etc. So in that case, use an alternative that does not cause the secret parameter to be logged to history, or prevent the entire command being logged to history.
One liner
aws configure set aws_access_key_id "AKIAI44QH8DHBEXAMPLE" --profile user2 && aws configure set aws_secret_access_key "je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY" --profile user2 && aws configure set region "us-east-1" --profile user2 && aws configure set output "text" --profile user2
Note: setting the region is optional (but never set it to an empty string if you don't have a region, or it will misbehave). The same goes for the profile: if you don't pass --profile, the values are written under the default profile.
👍 Better practice with Secrets
Use secrets, then use associated environment variables:
aws configure set aws_access_key_id "$AWS_ACCESS_KEY_ID" --profile user2 && aws configure set aws_secret_access_key "$AWS_ACCESS_KEY_SECRET" --profile user2 && aws configure set region "$AWS_REGION" --profile user2 && aws configure set output "text" --profile user2
📖 To know more
Run aws configure set help to get command line options.
Documentation for aws configure set.
Documentation for secrets: Docker, Kubernetes, GitLab.
If you want to automate this, you can write the files directly rather than going through the CLI; the CLI only writes those files anyway.
➜ cat ~/.aws/config
[profile_1]
output = json
region = eu-west-1
[profile_2]
output = json
region = eu-west-1
➜ cat ~/.aws/credentials
[profile_1]
aws_access_key_id =
aws_secret_access_key =
[profile_2]
aws_access_key_id =
aws_secret_access_key =
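Scripted, that amounts to something like the following (a sketch; the profile name and the environment variables are placeholders):
# Write the config and credentials files directly.
mkdir -p ~/.aws
cat > ~/.aws/config <<EOF
[profile_1]
output = json
region = eu-west-1
EOF
cat > ~/.aws/credentials <<EOF
[profile_1]
aws_access_key_id = $AWS_ACCESS_KEY_ID
aws_secret_access_key = $AWS_SECRET_ACCESS_KEY
EOF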
For those inclined to use bash, the following works quite well and keeps secrets out of your scripts. In addition, it will also save your input to a named profile in one go.
printf "%s\n%s\nus-east-1\njson" "$KEY_ID" "$SECRET_KEY" | aws configure --profile my-profile
I think this is the answer in one line
aws configure set aws_access_key_id $YOUR_ACCESS_KEY_ID; aws configure set aws_secret_access_key $YOUR_SECRET_ACCESS_KEY; aws configure set default.region $YOUR_AWS_DEFAULT_REGION
One liner
aws configure set aws_access_key_id "$AWS_ACCESS_KEY_ID" --profile profile_name_here && aws configure set aws_secret_access_key "$AWS_SECRET_ACCESS_KEY" --profile profile_name_here && aws configure set region "$AWS_REGION" --profile profile_name_here && aws configure set output "json" --profile profile_name_here
Setting individual configuration
profile_name_here is the aws profile name to be saved to your aws config. Replace it with your own.
ACCESS KEY
aws configure set aws_access_key_id "$AWS_ACCESS_KEY_ID" --profile profile_name_here
SECRET ACCESS KEY
aws configure set aws_secret_access_key "$AWS_SECRET_ACCESS_KEY" --profile profile_name_here
REGION
aws configure set region "$AWS_REGION" --profile profile_name_here
OUTPUT
aws configure set output "json" --profile profile_name_here
The value specified here is json, but you can replace it with any output format the AWS docs list as supported:
json
yaml
yaml-stream
text
table
Note: $AWS_ACCESS_KEY_ID, $AWS_SECRET_ACCESS_KEY and $AWS_REGION here are environment variables (for example, injected by your CI). You can also replace them with plain string values, but that is not safe.
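If the consumer of the credentials is the same shell session, you can often skip aws configure entirely; the AWS CLI honors its standard environment variables (shown here with the example values from the question):
export AWS_ACCESS_KEY_ID=AKIAI44QH8DHBEXAMPLE
export AWS_SECRET_ACCESS_KEY=je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
export AWS_DEFAULT_REGION=us-east-1
# Sanity check: prints the caller identity if the credentials are valid.
aws sts get-caller-identity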
Building upon the suggestion by Tom in jarmod's answer, to "configure your keys in a config file that you then share with your docker container instead".
I found that slightly confusing as I'm new to using Docker and awscli.
Also, I believe most who end up at this question are similarly trying to use Docker and awscli together.
So what you'd want to do, step by step is:
Create a credentials file containing
[default]
aws_access_key_id = default_access_key
aws_secret_access_key = default_secret_key
that you copy to ~/.aws/credentials, using a line in your Dockerfile like
COPY credentials /root/.aws/credentials
and a config file containing
[default]
region = us-west-2
output = table
that you copy to ~/.aws/config, using a line in your Dockerfile like
COPY config /root/.aws/config
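Keep in mind that COPYing credentials bakes them into the image layers. An alternative sketch is to mount them read-only at run time instead (my-image is a placeholder image name):
# Mount the host's AWS config directory into the container instead of COPYing it.
docker run --rm -v ~/.aws:/root/.aws:ro my-image aws sts get-caller-identity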
Reference:
aws configure set help
