Create Jenkinsfile from "execute shell" commands - bash

I have a series of Execute shell boxes on a Jenkins build. After three days of Googling and watching videos, I need help. I am more a sysadmin than a coder, so I'm having difficulty creating a Jenkinsfile with the correct options and syntax. Can anyone advise? I need to create a pipeline. Anything shown as <name> is redacted for security reasons; I have the real values in the files.
Execute shell
mkdir -p deploy
Execute shell
cp -R code/api deploy/
cp docker/Dockerfile.dev deploy/
(cd deploy/api/<Name>.<Name>.Web/ && aws s3 cp --recursive --region=eu-west-1 s3://config.<name>/audience-view/atg/dev/API/ .)
Execute shell
cd deploy && docker build -t <name> -f Dockerfile.dev .
Execute shell
aws ecr get-login --region eu-west-1 > docker_login.sh && chmod +x docker_login.sh && ./docker_login.sh
docker tag <name>:latest 543573289192.dkr.ecr.eu-west-1.amazonaws.com/<name>:latest
docker push <name>.dkr.ecr.eu-west-1.amazonaws.com/<name>:latest
Execute shell
docker rmi audience-view-dev-api
docker rmi 543573289192.dkr.ecr.eu-west-1.amazonaws.com/<name>:latest
Execute shell
RUNNING_TASKS=$(aws ecs list-tasks --region=eu-west-1 --cluster <name> --family <name> --query 'taskArns')
if [ "$RUNNING_TASKS" != "[]" ]; then
    TASK_ARN=$(aws ecs list-tasks --region=eu-west-1 --cluster <name> --family <name> --query 'taskArns[0]' | sed 's/\"//g')
    aws ecs stop-task --region=eu-west-1 --cluster=<name> --task=$TASK_ARN --reason="Deployment from Jenkins"
    while [ "$RUNNING_TASKS" != "[]" ]; do
        sleep 5
        RUNNING_TASKS=$(aws ecs list-tasks --region=eu-west-1 --cluster <name> --family <name> --query 'taskArns')
    done
fi
Execute shell
TASK_ARN=$(aws ecs start-task --region=eu-west-1 --cluster <name> --task-definition <name> --container-instances 5f0c5b75-64a2-45cf-8ced-d6a6d13d2666 --query 'tasks[0].taskArn' | sed 's/arn:aws:ecs:eu-west-1:543573289192:task\///' | sed 's/\"//g')
TASK_STATUS=$(aws ecs describe-tasks --region=eu-west-1 --cluster <name> --tasks $TASK_ARN --query 'tasks[0].lastStatus' | sed 's/\"//g')
while [ "$TASK_STATUS" == "PENDING" ]; do
    echo $TASK_STATUS
    TASK_STATUS=$(aws ecs describe-tasks --region=eu-west-1 --cluster <name> --tasks $TASK_ARN --query 'tasks[0].lastStatus' | sed 's/\"//g')
    if [ "$TASK_STATUS" == "STOPPED" ]; then
        echo $(aws ecs describe-tasks --region=eu-west-1 --cluster <name> --tasks $TASK_ARN --query 'tasks[0].containers[0].exitCode')
        exit 1
    fi
done

Jenkins is best used as the glue that connects all the build pieces together, not as the build script itself. As Alfe mentioned, it would be best to put all of this in a shell script and then have Jenkins run that script.
BUT, if you really want to keep each step in the Pipeline itself, it would look something like this (declarative pipeline):
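A minimal sketch of that approach (the file name deploy.sh is my assumption): commit the commands above into a script in your repo and call it from a one-stage pipeline:
// Jenkinsfile
pipeline {
    agent any
    stages {
        stage('deploy') {
            steps {
                // deploy.sh bundles all of the Execute shell boxes above
                sh './deploy.sh'
            }
        }
    }
}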
pipeline {
    agent any
    stages {
        stage('setup') {
            steps {
                sh "mkdir -p deploy"
            }
        }
        stage('nextStage') {
            steps {
                sh """
                    cp -R code/api deploy/
                    cp docker/Dockerfile.dev deploy/
                    (cd deploy/api/<Name>.<Name>.Web/ && aws s3 cp --recursive --region=eu-west-1 s3://config.<name>/audience-view/atg/dev/API/ .)
                """
            }
        }
        stage('anotherStage') {
            steps {
                echo "repeat for all your shell steps"
            }
        }
    }
}
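For example, the Docker build-and-push box could become its own stage. A minimal sketch reusing the question's placeholders (the registry URL and image names are assumptions; substitute your real values):
stage('buildAndPush') {
    steps {
        sh """
            cd deploy && docker build -t <name> -f Dockerfile.dev .
            aws ecr get-login --region eu-west-1 > docker_login.sh && chmod +x docker_login.sh && ./docker_login.sh
            docker tag <name>:latest 543573289192.dkr.ecr.eu-west-1.amazonaws.com/<name>:latest
            docker push 543573289192.dkr.ecr.eu-west-1.amazonaws.com/<name>:latest
        """
    }
}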

Related

How to open shell with colours using AWS ECS execute-command?

I'm using execute-command to open a shell in an AWS Fargate container:
aws ecs execute-command --cluster MtStack-MyCluster7G3C63FE-D8338439438C \
--task d5d35723871267123672312a \
--interactive \
--command "/bin/bash"
The shell does not show any colours. Is there a way to enable colours?

Shell script syntax, escape character

I have a shell script, given below, that adds an AWS instance to an Auto Scaling group's scale-in protection. When I run the commands individually, everything works fine, but when I put them in a shell file and execute it, there is an error. See the script below:
set -x
INSTANCE_ID=$(wget -q -O - http://169.254.169.254/latest/meta-data/instance-id)
ASG_NAME=$(aws ec2 describe-tags --filters "Name=resource-id,Values=$INSTANCE_ID" --region us-east-2 | jq '.Tags[] | select(.["Key"] | contains("a:autoscaling:groupName")) | .Value')
ASG_NAME=$(echo $ASG_NAME | tr -d '"')
aws autoscaling set-instance-protection --instance-ids $INSTANCE_ID --auto-scaling-group-name $ASG_NAME --protected-from-scale-in --region us-east-2
The error is given below. I think the issue is with the second line: it is not able to get ASG_NAME. I tried some escape characters but nothing worked.
+++ wget -q -O - http://169.254.169.254/latest/meta-data/instance-id
++ INSTANCE_ID=i-----
+++ aws ec2 describe-tags --filters Name=resource-id,Values=i------ --region us-east-2
+++ jq '.Tags[] | select(.["Key"] | contains("a:autoscaling:groupName")) | .Value'
++ ASG_NAME=
+++ echo
+++ tr -d '"'
++ ASG_NAME=
++ aws autoscaling set-instance-protection --instance-ids i---- --auto-scaling-group-name --protected-from-scale-in --region us-east-2
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
aws: error: argument --auto-scaling-group-name: expected one argument
Solved the issue thanks to @chepner's recommendation. Modified the second line to:
ASG_NAME=$(aws ec2 describe-tags --filters "Name=resource-id,Values=$INSTANCE_ID" --region us-east-2 --query 'Tags[1].Value')
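As an aside (not part of the original answer): the jq filter matched nothing because the tag key is actually aws:autoscaling:groupName, which does not contain the substring a:autoscaling:groupName. If relying on the tag's position in Tags[1] feels fragile, a key filter plus --output text avoids both jq and the stray quotes; a sketch, assuming the standard ASG tag key:
ASG_NAME=$(aws ec2 describe-tags --filters "Name=resource-id,Values=$INSTANCE_ID" "Name=key,Values=aws:autoscaling:groupName" --region us-east-2 --output text --query 'Tags[0].Value')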

aws ec2 run-instances: script as the plain text is ignored

I'm trying to pass a script as the --user-data parameter.
If the same script is passed with --user-data file://some_file.sh, everything works. It also works if I launch the instance through the AWS GUI and add the user data in the corresponding launch configuration box.
My CLI command is
aws ec2 run-instances --image-id ami-0cc0a36f626a4fdf5 --count 1 --instance-type t2.micro --key-name key_name --security-group-ids sg-00000000 --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=some_name}]" --output table --user-data "sudo touch /tmp/install.log && sudo chmod 777 /tmp/install.log && echo $(date) >> /tmp/install.log"
If the same is run as a script, its content is formatted as below:
#!/bin/bash
sudo touch /tmp/install.log
sudo chmod 777 /tmp/install.log
echo $(date) >> /tmp/install.log
Also, I'd like to mention that I tried to pass the string in different formats, such as:
--user-data echo "some text"
--user-data "command_1\n command_2\n"
--user-data "command_1 && command_2"
--user-data "command_1; command_2;"
--user-data "#!/bin/bash; command_1; command_2;"
The user data is visible after launch, but it is not executed:
$ curl -L http://169.254.169.254/latest/user-data/
The first line must start with #!.
Then subsequent lines are executed, and they must be separated by real newline characters. It looks like \n in your attempts is not interpreted as one.
From how to pass in the user-data when launching AWS instances using CLI:
$ aws ec2 run-instances --image-id ami-16d4986e --user-data '#!/bin/bash
> poweroff'
As an experiment, I put this at the end of the run-instances command:
aws ec2 run-instances ... --user-data '#!
echo bar >/tmp/foo
'
When I logged into the instance, I could see the /tmp/foo file.
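Another option, assuming your local shell is bash (my addition, not from the original answers): ANSI-C quoting, $'...', expands \n to real newlines before the argument is sent, so the script arrives properly line-separated. Note that user-data scripts already run as root, so the sudo prefixes are unnecessary:
aws ec2 run-instances --image-id ami-0cc0a36f626a4fdf5 --count 1 --instance-type t2.micro \
  --user-data $'#!/bin/bash\ntouch /tmp/install.log\nchmod 777 /tmp/install.log\ndate >> /tmp/install.log\n'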

Stopping task on AWS ECS via CLI (program output as argument input bash)

I'm trying to kill a task in ECS via the CLI.
I can fetch the task name by executing:
aws ecs list-tasks --cluster "my-cluster" --service-name "my-service" | jq .taskArns[0]
which outputs:
"arn:aws:ecs:REGION:ACCOUNT-ID:task/TASK-GUID"
the full ARN of the task as a string (I have a global default setting my output to JSON).
I can kill the task by executing:
aws ecs stop-task --cluster "my-cluster" --task "task-arn"
However when I try and combine it:
aws ecs stop-task --cluster "my-cluster" --task $(aws ecs list-tasks --cluster "my-cluster" --service-name "my-service" | jq .taskArns[0])
I get:
An error occurred (InvalidParameterException) when calling the StopTask operation: taskId longer than 36.
I know this is probably a bash output/argument interpolation issue, but I've looked that up and cannot get to the bottom of it.
The AWS CLI essentially has jq built in, so a better (simpler) way to query your task ARN is:
aws ecs list-tasks --cluster "my-cluster" --service "my-service" --output text --query taskArns[0]
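For what it's worth (my note, not the original answer's): the error occurs because jq prints the ARN with its surrounding JSON quotes, so stop-task receives a task ID that includes literal " characters. jq's -r flag emits the raw string, so the combined command also works as:
aws ecs stop-task --cluster "my-cluster" --task $(aws ecs list-tasks --cluster "my-cluster" --service-name "my-service" | jq -r '.taskArns[0]')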
Maybe this helps someone:
Killing a task with a unique task definition name:
OLD_TASK_ID=$(aws ecs list-tasks --cluster ${ecsClusterName} --desired-status RUNNING --family ${nameTaskDefinition} | egrep "task/" | sed -E "s/.*task\/(.*)\"/\1/")
aws ecs stop-task --cluster ${ecsClusterName} --task ${OLD_TASK_ID}
Killing multiple tasks (same task definition name but different task ids):
OLD_TASK_IDS=$(aws ecs list-tasks --cluster ${ecsClusterName} --desired-status RUNNING --family ${nameTaskDefinition} | egrep "task/" | sed -E "s/.*task\/(.*)\"/\1/" | sed -z 's/\n/ /g')
IFS=', ' read -r -a array <<< "$OLD_TASK_IDS"
for element in "${array[@]}"
do
aws ecs stop-task --cluster ${ecsClusterName} --task ${element}
done
One-liner command to stop tasks in cluster/service
for taskarn in $(aws ecs list-tasks --cluster ${YOUR_CLUSTER} --service ${YOUR_SERVICE} --desired-status RUNNING --output text --query 'taskArns'); do aws ecs stop-task --cluster ${YOUR_CLUSTER} --task $taskarn; done;
One-liner version of nathanpeck's great answer:
aws ecs stop-task --cluster "my-cluster" --task $(aws ecs list-tasks --cluster "my-cluster" --service "my-service" --output text --query taskArns[0])

Mount an EBS volume (not snapshot) to Elastic Beanstalk EC2

I'm migrating a legacy app to Elastic Beanstalk. It needs persistent storage (for the time being), so I want to mount an EBS volume.
I was hoping the following would work in .ebextensions/ebs.config:
commands:
  01mkdir:
    command: "mkdir /data"
  02mount:
    command: "mount /dev/sdh /data"
option_settings:
  - namespace: aws:autoscaling:launchconfiguration
    option_name: BlockDeviceMappings
    value: /dev/sdh=vol-XXXXX
https://blogs.aws.amazon.com/application-management/post/Tx224DU59IG3OR9/Customize-Ephemeral-and-EBS-Volumes-in-Elastic-Beanstalk-Environments
But unfortunately I get the following error: "(vol-XXXX) for parameter snapshotId is invalid. Expected: 'snap-...'."
Clearly this method only allows snapshots. Can anyone suggest a fix or an alternative method?
I have found a solution. It could be improved by removing the "sleep 10", but unfortunately that is required because aws ec2 attach-volume is asynchronous and returns straight away, before the attachment takes place.
container_commands:
  01mount:
    command: "aws ec2 attach-volume --volume-id vol-XXXXXX --instance-id $(curl -s http://169.254.169.254/latest/meta-data/instance-id) --device /dev/sdh"
    ignoreErrors: true
  02wait:
    command: "sleep 10"
  03mkdir:
    command: "mkdir /data"
    test: "[ ! -d /data ]"
  04mount:
    command: "mount /dev/sdh /data"
    test: "! mountpoint -q /dev/sdh"
Note: ideally this would run in the commands section rather than container_commands, but the environment variables are not set in time.
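A possible refinement (my suggestion, borrowing the waiter a later answer uses): replace the "sleep 10" with the CLI's built-in waiter, which polls until the volume is attached; <your-region> is a placeholder:
container_commands:
  02wait:
    command: "aws ec2 wait volume-in-use --volume-ids vol-XXXXXX --region <your-region>"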
To add to @Simon's answer (to avoid traps for the unwary):
If the persistent storage being mounted will ultimately be used inside a Docker container (e.g. if you're running Jenkins and want to persist jenkins_home), you need to restart the docker container after running the mount.
You need to have the ec2:AttachVolume action permitted against both the EC2 instance (or the instance/* ARN) and the volume(s) you want to attach (or the volume/* ARN) in the EB assumed-role policy. Without this, the aws ec2 attach-volume command fails.
You need to pass in the --region to the aws ec2 ... command as well (at least, as of this writing)
Alternatively, instead of using an EBS volume, you could consider using Elastic File System (EFS) storage. AWS has published a guide on how to mount an EFS volume to Elastic Beanstalk EC2 instances, and an EFS volume can be attached to multiple EC2 instances simultaneously (which is not possible with EBS).
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/services-efs.html
Here's a config file that you can drop in .ebextensions. You will need to provide the VOLUME_ID that you want to attach. The test commands make it so that attaching and mounting only happens as needed, so that you can eb deploy repeatedly without errors.
container_commands:
  00attach:
    command: |
      export REGION=$(/opt/aws/bin/ec2-metadata -z | awk '{print substr($2, 0, length($2)-1)}')
      export INSTANCE_ID=$(/opt/aws/bin/ec2-metadata -i | awk '{print $2}')
      export VOLUME_ID=$(aws ec2 describe-volumes --region ${REGION} --output text --filters Name=tag:Name,Values=tf-trading-prod --query 'Volumes[*].VolumeId')
      aws ec2 attach-volume --region ${REGION} --device /dev/sdh --instance-id ${INSTANCE_ID} --volume-id ${VOLUME_ID}
      aws ec2 wait volume-in-use --region ${REGION} --volume-ids ${VOLUME_ID}
      sleep 1
    test: "! file -E /dev/xvdh"
  01mkfs:
    command: "mkfs -t ext3 /dev/xvdh"
    test: "file -s /dev/xvdh | awk '{print $2}' | grep -q data"
  02mkdir:
    command: "mkdir -p /data"
  03mount:
    command: "mount /dev/xvdh /data"
    test: "! mountpoint /data"
You have to use container_commands because when commands run, the source bundle is not yet fully unpacked.
.ebextensions/whatever.config
container_commands:
  chmod:
    command: chmod +x .platform/hooks/predeploy/mount-volume.sh
Predeploy hooks run after container commands but before the deployment. There is no need to restart your Docker container even if it mounts a directory on the attached EBS volume, because Beanstalk spins the container up after the predeploy hooks complete. You can see this in the logs.
.platform/hooks/predeploy/mount-volume.sh
#!/bin/sh
# Make sure LF line endings are used in the file, otherwise there would be an error saying "file not found".
# All platform hooks run as root user, no need for sudo.
# Before attaching the volume find out the root volume's name, so that we can later use it for filtering purposes.
# -d – to filter out partitions.
# -P – to display the result as key-value pairs.
# -o – to output only the matching part.
# lsblk strips the "/dev/" part
ROOT_VOLUME_NAME=$(lsblk -d -P | grep -o 'NAME="[a-z0-9]*"' | grep -o '[a-z0-9]*')
aws ec2 attach-volume --volume-id vol-xxx --instance-id $(curl -s http://169.254.169.254/latest/meta-data/instance-id) --device /dev/sdf --region us-east-1
# The above command is async, so we need to wait.
aws ec2 wait volume-in-use --volume-ids vol-xxx --region us-east-1
# Now lsblk should show two devices. We figure out which one is non-root by filtering out the stored root volume name.
NON_ROOT_VOLUME_NAME=$(lsblk -d -P | grep -o 'NAME="[a-z0-9]*"' | grep -o '[a-z0-9]*' | awk -v name="$ROOT_VOLUME_NAME" '$0 !~ name')
FILE_COMMAND_OUTPUT=$(file -s /dev/$NON_ROOT_VOLUME_NAME)
# Create a file system on the non-root device only if there isn't one already, so that we don't accidentally override it.
if test "$FILE_COMMAND_OUTPUT" = "/dev/$NON_ROOT_VOLUME_NAME: data"; then
mkfs -t xfs /dev/$NON_ROOT_VOLUME_NAME
fi
mkdir /data
mount /dev/$NON_ROOT_VOLUME_NAME /data
# Need to make sure that the volume gets mounted after every reboot, because by default only root volume is automatically mounted.
cp /etc/fstab /etc/fstab.orig
NON_ROOT_VOLUME_UUID=$(lsblk -d -P -o +UUID | awk -v name="$NON_ROOT_VOLUME_NAME" '$0 ~ name' | grep -o 'UUID="[-0-9a-z]*"' | grep -o '[-0-9a-z]*')
# We specify 0 to prevent the file system from being dumped, and 2 to indicate that it is a non-root device.
# If you ever boot your instance without this volume attached, the nofail mount option enables the instance to boot
# even if there are errors mounting the volume.
# Debian derivatives, including Ubuntu versions earlier than 16.04, must also add the nobootwait mount option.
echo "UUID=$NON_ROOT_VOLUME_UUID /data xfs defaults,nofail 0 2" | tee -a /etc/fstab
Pretty sure the things I do with grep and awk could be done in a more concise manner. I'm not great at Linux.
Instance profile should include these permissions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:AttachVolume",
        "ec2:DetachVolume",
        "ec2:DescribeVolumes"
      ],
      "Resource": [
        "arn:aws:ec2:*:*:volume/*",
        "arn:aws:ec2:*:*:instance/*"
      ]
    }
  ]
}
You have to ensure that you deploy the EBS volume in the same AZ as your Beanstalk environment and that you use a SingleInstance deployment. Then, if your instance crashes, the ASG will terminate it, create another one, and attach the volume to the new instance, keeping all the data.
Here it is with the missing config:
commands:
  01mount:
    command: "export AWS_ACCESS_KEY_ID=<replace by your AWS key> && export AWS_SECRET_ACCESS_KEY=<replace by your AWS secret> && aws ec2 attach-volume --volume-id <replace by your volume id> --instance-id $(curl -s http://169.254.169.254/latest/meta-data/instance-id) --device /dev/xvdf --region <replace with your region>"
    ignoreErrors: true
  02wait:
    command: "sleep 10"
  03mkdir:
    command: "mkdir /home/lucene"
    test: "[ ! -d /home/lucene ]"
  04mount:
    command: "mount /dev/xvdf /home/lucene"
    test: "! mountpoint -q /dev/xvdf"
