AWS CloudFormation - Installing packages with cfn-init - bash

I have created an EC2 instance via CloudFormation and I am trying to get it to install Postgres on the instance directly via CloudFormation. However, when I SSH into my instance and try to run psql via the command line, I keep getting:
bash: psql: command not found
I have tried doing it manually, installing Postgres with the command below, and it works fine.
sudo yum install postgresql postgresql-server postgresql-devel postgresql-contrib postgresql-docs
Could it be because I'm just updating the stack, and thus the EC2 instance, rather than creating a new one?
Below is a snippet from the CloudFormation template. Everything works when I update the template, but it seems that Postgres still isn't installed...
DbWrapper:
  Type: AWS::EC2::Instance
  Metadata:
    AWS::CloudFormation::Init:
      config:
        packages:
          yum:
            postgresql: []
            postgresql-server: []
            postgresql-devel: []
            postgresql-contrib: []
            postgresql-docs: []
  Properties:
    ImageId: ami-f976839e # AMI: Amazon Linux 2
    InstanceType: t2.micro
    AvailabilityZone: eu-west-2a
    SecurityGroupIds:
      - !Ref Ec2SecurityGroup
    SubnetId: !Ref SubnetA
    KeyName: !Ref KeyPairName
    UserData:
      Fn::Base64:
        !Join [ "", [
          "#!/bin/bash -xe\n",
          "sudo yum update\n",
          "sudo yum install -y aws-cfn-bootstrap\n", # download the AWS helper scripts
          "sudo /opt/aws/bin/cfn-init -v ", # use cfn-init to install the packages declared in AWS::CloudFormation::Init
          !Sub "--stack ${AWS::StackName} ",
          "--resource DbWrapper ",
          "--configsets Install ",
          !Sub "--region ${AWS::Region} ",
          "\n" ] ]

If anyone is encountering the same problem: the solution was indeed that you need to delete the instance and recreate it from scratch. Just updating the stack won't work.
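Before rebuilding, you can also confirm on the instance whether cfn-init ever ran and what it did; a quick check (the log paths are the Amazon Linux defaults):

sudo cat /var/log/cfn-init.log # what cfn-init executed, package by package
sudo cat /var/log/cloud-init-output.log # full UserData output, including any yum errors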

A bit late here (I found this searching for another issue), but you can re-run your CF Launch Config with this piece from your snippet:
UserData:
  Fn::Base64:
    !Join [ "", [
      "#!/bin/bash -xe\n",
      "sudo yum update\n",
      "sudo yum install -y aws-cfn-bootstrap\n", # download the AWS helper scripts
      "sudo /opt/aws/bin/cfn-init -v ", # use cfn-init to install the packages declared in AWS::CloudFormation::Init
      !Sub "--stack ${AWS::StackName} ",
      "--resource DbWrapper ",
      "--configsets Install ",
      !Sub "--region ${AWS::Region} ",
      "\n" ] ]
The /opt/aws/bin/cfn-init command is what runs the metadata config from the launch config you've specified, which is where your packages are defined.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-init.html
The reason deleting the instance and recreating it works is that it re-runs the UserData piece from the EC2 section above.
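If you'd rather not replace the instance every time the metadata changes, you can also re-run cfn-init by hand over SSH after updating the stack; a sketch, where my-stack and eu-west-2 are placeholders for your stack name and region:

sudo /opt/aws/bin/cfn-init -v \
  --stack my-stack \
  --resource DbWrapper \
  --configsets Install \
  --region eu-west-2

cfn-init fetches the latest resource metadata from CloudFormation on each run, so this picks up package changes without a rebuild.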

This is related to you calling cfn-init with a --configsets value that you have not defined. You would need to add the configSets section below to your Metadata section:
Metadata:
  AWS::CloudFormation::Init:
    configSets:
      Install:
        - "config"
    config:
      packages:
        yum:
          postgresql: []
          postgresql-server: []
          postgresql-devel: []
          postgresql-contrib: []
          postgresql-docs: []
Otherwise, remove --configsets from your original cfn-init call.
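For reference, the resulting shell command without the flag would be the following (a sketch, as rendered after the !Sub substitutions; cfn-init then runs the default config key):

sudo /opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --resource DbWrapper --region ${AWS::Region}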
References:
- cfn-init
- AWS::CloudFormation::Init

Related

Invalid Layer Arn Error when using ARN value from SSM parameters

The Lambda layer ARN is stored in an SSM parameter, and I need to access the value of this parameter to use as the layer ARN when defining a function and attaching a layer to it.
ERROR: SayHelloLayerARN is an Invalid Layer Arn.
Parameter Name in Parameter Store: SayHelloLayerARN
Here is the SAM template:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  CREATE-WITH-SSM
  Sample SAM Template for CREATE-WITH-SSM
Parameters:
  HelloLayerARN:
    Type: AWS::SSM::Parameter::Value<String>
    Default: SayHelloLayerARN
    Description: Layer ARN from SSM
Globals:
  Function:
    Timeout: 3
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: hello_world/
      Handler: app.lambda_handler
      Runtime: python3.8
      Environment:
        Variables:
          LAYER_NAME: !Ref HelloLayerARN
      Layers:
        - !Ref HelloLayerARN
      Events:
        HelloWorld:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            Path: /hello
            Method: get
It seems SAM doesn't resolve SSM parameters.
Please try using the --parameter-overrides option.
Example: sam build --parameter-overrides HelloLayerARN=LambdaLayerARN
Note: You must change the HelloLayerARN Type to a plain String, otherwise sam deploy fails with an SSM parameter resolving error.
Parameters:
  HelloLayerARN:
    Type: String # AWS::SSM::Parameter::Value<String>
    Default: SayHelloLayerARN
    Description: Layer ARN from SSM
Please refer to the known issue: https://github.com/aws/aws-sam-cli/issues/1069
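For example, the full build and deploy sequence with an override might look like this (a sketch; the layer ARN value is a placeholder):

sam build --parameter-overrides HelloLayerARN=arn:aws:lambda:us-east-1:123456789012:layer:SayHello:1
sam deploy --parameter-overrides HelloLayerARN=arn:aws:lambda:us-east-1:123456789012:layer:SayHello:1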
The --parameter-overrides solution mentioned by @user17589914 works for build and deploy, but it does not work for local invoke (I will be very happy to be proven wrong). Below are some details on my findings and workaround:
The layers-specific issue aside, there is an open issue about the inconsistency between --env-vars and --parameter-overrides across build, deploy, and local invoke, just FYI:
https://github.com/aws/aws-sam-cli/issues/1163
So in general, I am using --env-vars for local invoke with dev parameters defined in a JSON file. And for build & deploy, I use --parameter-overrides with parameters for multiple environments defined in a samconfig.toml.
As for the layer ARN reference not working: I have not been able to get local invoke to work by passing the ARN as a parameter with either --env-vars or --parameter-overrides. So, I ended up leaving the layer ARN hard-coded in my SAM template.
Looking forward to seeing if I am missing something and someone has this working for local invoke as well.
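For completeness, the --env-vars approach for local invoke looks like this; a sketch where env.json and the ARN value are assumptions, keyed by the function's logical ID:

env.json:
{
  "HelloWorldFunction": {
    "LAYER_NAME": "arn:aws:lambda:us-east-1:123456789012:layer:SayHello:1"
  }
}

sam local invoke HelloWorldFunction --env-vars env.json

Note this only overrides environment variables; it does not resolve the Layers: parameter in the template, which matches the behaviour described above.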
Have you tried another SAM CLI version?
I got the same error message with SAM CLI version 1.21.1 but not with 1.29.0. I did a proof of concept via the SAM container image public.ecr.aws/sam/build-nodejs14.x on my local machine (macOS):
#!/bin/sh
# SAM_VERSION=1.21.1
SAM_VERSION=1.29.0
CONTAINER=public.ecr.aws/sam/build-nodejs14.x:$SAM_VERSION
EXEC_DIR=/path/to/sam
TARGET_FUNCTION=YOUR_FUNCTION_NAME

docker run \
  --rm -it $CONTAINER \
  sam --version

docker run \
  --env SAM_CLI_TELEMETRY=0 \
  --env-file $EXEC_DIR/.env \
  -v $EXEC_DIR/functions:/functions \
  --rm -it $CONTAINER \
  sh -c "cd /functions/$TARGET_FUNCTION && sam build"
The SAM CLI requires AWS credentials, so you need to provide the environment variables below, e.g. my .env file:
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
AWS_DEFAULT_REGION=YOUR_TARGET_REGION
AWS_REGION=YOUR_TARGET_REGION
and don't forget to create an IAM policy that allows iam:ListPolicies:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowUsersToPerformUserActions",
      "Effect": "Allow",
      "Action": [
        "iam:ListPolicies"
      ],
      "Resource": "*"
    }
  ]
}
Result: (screenshot of the successful build omitted)

Using aws-cli in cronjob

I'm trying to run aws sts get-caller-identity in a CronJob; however, this results in /bin/sh: 1: aws: not found
spec:
  containers:
    - command:
        - /bin/sh
        - -c
        - aws sts get-caller-identity
As already mentioned in the comments, it seems that the AWS CLI is not installed in the image that you are using for this CronJob. You need to provide more information!
If you are the owner of the image, just install the AWS CLI within the Dockerfile. If you are not the owner, create your own image: extend the image you are currently using and install the AWS CLI.
For example, if you are using an Alpine-based image, just create a Dockerfile:
FROM <THE_ORIGINAL_IMAGE>:<TAG>

RUN apk add --no-cache python3 py3-pip && \
    pip3 install --upgrade pip && \
    pip3 install awscli
Then build the image and push it, for example to Docker Hub.
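For example (a sketch; the image name is a placeholder):

docker build -t your-dockerhub-user/job-image-with-awscli:latest .
docker push your-dockerhub-user/job-image-with-awscli:latest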
Now you can use this new image in your CronJob resource.
BUT, the next thing is that your CronJob pod needs permission to call the AWS STS service. There are multiple ways to get this done. The best way is to use IRSA (IAM Roles for Service Accounts); just check this blog article: https://aws.amazon.com/de/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/
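As a rough sketch of the Kubernetes side of IRSA (the role ARN is a placeholder, and the IAM role itself must be set up against the cluster's OIDC provider as described in the article):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: aws-cli-job
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/aws-cli-job-role

Reference it via serviceAccountName in the CronJob's pod spec and the AWS CLI will pick up the role credentials automatically.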
If you still need help, just provide more details.
Step 1:
You need to add the secret keys to Kubernetes secrets:
kubectl create secret generic aws-cred --from-literal=AWS_SECRET_ACCESS_KEY=xxxxxxxxx --from-literal=AWS_ACCESS_KEY_ID=xxxxx
Step 2: copy this into cronjob.yaml:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: aws-cli-sync
  labels:
    app: aws-cli-sync
spec:
  schedule: "0 17 * * *"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: aws-cli-sync
              image: mikesir87/aws-cli
              env:
                - name: AWS_ACCESS_KEY_ID
                  valueFrom:
                    secretKeyRef:
                      name: aws-cred
                      key: AWS_ACCESS_KEY_ID
                - name: AWS_SECRET_ACCESS_KEY
                  valueFrom:
                    secretKeyRef:
                      name: aws-cred
                      key: AWS_SECRET_ACCESS_KEY
              args:
                - /bin/sh
                - -c
                - date;aws s3 sync s3://xxx-backup-prod s3://elk-xxx-backup
          restartPolicy: Never
Step 3: apply the job in the namespace where you created the secret:
kubectl apply -f ./cronjob.yaml
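To verify, you can list the CronJob and trigger a one-off run from it (the job name here is arbitrary):

kubectl get cronjob aws-cli-sync
kubectl create job --from=cronjob/aws-cli-sync aws-cli-sync-test
kubectl logs job/aws-cli-sync-test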

Can someone look at my yaml file for code deployment using Bitbucket Pipelines?

This is my first attempt at setting up pipelines or even using any CI/CD tool. So, reading the documentation at Bitbucket, I added the bitbucket-pipelines.yml file in the root of my Laravel application for a build. Here is the file.
image: php:7.4-fpm

pipelines:
  default:
    - step:
        name: Build and test
        caches:
          - composer
        script:
          - apt-get update && apt-get install -qy git curl libmcrypt-dev mariadb-client ghostscript
          - yes | pecl install mcrypt-1.0.3
          - docker-php-ext-install pdo_mysql bcmath exif
          - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
          - composer install
          - ln -f -s .env.pipelines .env
          - php artisan migrate
          - ./vendor/bin/phpunit
        services:
          - mysql
          - redis

definitions:
  services:
    mysql:
      image: mysql:5.7
      environment:
        MYSQL_DATABASE: "laravel-pipeline"
        MYSQL_RANDOM_ROOT_PASSWORD: "yes"
        MYSQL_USER: "homestead"
        MYSQL_PASSWORD: "secret"
    redis:
      image: redis
The above works fine in building the application, running tests, etc. But when I add the below to deploy, using the scp pipe, I get a notice saying either that I need to include an image, or at times that there is a bad indentation of a mapping entry.
- step:
    name: Deploy to test
    deployment: test
    # trigger: manual # Uncomment to make this a manual deployment.
    script:
      - pipe: atlassian/scp-deploy:0.3.13
        variables:
          USER: '${remoteUser}'
          SERVER: '${server}'
          REMOTE_PATH: '${remote}'
          LOCAL_PATH: '${BITBUCKET_CLONE_DIR}/*'
I don't really know YAML, and this is my first time working with a CI/CD tool, so I am lost. Can someone guide me on what I am doing wrong?
Your indentation for name and deployment is not the same as for script. Try putting them all at the same indentation level, like this:
- step:
    name: Deploy to test
    deployment: test
    script:
      - pipe: atlassian/scp-deploy:0.3.13
        variables:
          USER: '${remoteUser}'
          SERVER: '${server}'
          REMOTE_PATH: '${remote}'
          LOCAL_PATH: '${BITBUCKET_CLONE_DIR}/*'

Having CloudFormation wait for the user data

I have a CloudFormation stack which creates an EC2 instance and installs something on it using UserData. CloudFormation immediately reports CREATE_COMPLETE upon creation of the EC2 instance, which is based on Red Hat. But at this point the instance is not really usable, since the user data takes about 40 minutes to finish. I read through the documentation and even tried cfn-signal, but I could not successfully execute it.
Can someone tell me how exactly it has to be done?
EC2Instance:
  Type: AWS::EC2::Instance
  Properties:
    CreditSpecification:
      CPUCredits: standard
    IamInstanceProfile:
      Fn::ImportValue:
        !Sub ${InstanceProfileStackName}-instanceProfile
    ImageId: !Ref ImageId
    InstanceInitiatedShutdownBehavior: stop
    InstanceType: !Ref InstanceType
    SubnetId: !Ref SubnetId
    SecurityGroupIds:
      - !Ref DefaultSecurityGroup
      - !Ref WebSecurityGroup
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash
        set -e
        yum update -y
The above is a truncated part of my CloudFormation template.
UPDATE
I have a script which contains the following line:
source scl_source enable rh-python36
The default Python on my instance is 2.7, but I had to install my pip packages with Python 3.6. I am not sure if that was making cfn-signal fail.
The script is getting to the final step and seems to fail there. I am creating a recordset from the EC2 IP, but CloudFormation still thinks the EC2 instance is not done and waits until the timeout.
(Screenshots of the instance console and of the end of the log file omitted.)
Also, my log file is named /var/log/cloud-init.log. There was no cloud-init-output.log in that directory.
You need two components:
- A CreationPolicy, so that CFN waits for a SUCCESS signal from the instance.
- The cfn-signal helper script, to perform the signalling action.
Thus your template could be modified as follows for Red Hat 8:
EC2Instance:
  Type: AWS::EC2::Instance
  CreationPolicy: # <--- creation policy with a timeout of 5 minutes
    ResourceSignal:
      Timeout: PT5M
  Properties:
    CreditSpecification:
      CPUCredits: standard
    IamInstanceProfile:
      Fn::ImportValue:
        !Sub ${InstanceProfileStackName}-instanceProfile
    ImageId: !Ref ImageId
    InstanceInitiatedShutdownBehavior: stop
    InstanceType: !Ref InstanceType
    SubnetId: !Ref SubnetId
    SecurityGroupIds:
      - !Ref DefaultSecurityGroup
      - !Ref WebSecurityGroup
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash -x
        yum update -y
        yum -y install python2-pip
        pip2 install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz
        python2 /usr/bin/cfn-signal -e $? \
          --stack ${AWS::StackName} \
          --resource EC2Instance \
          --region ${AWS::Region}
For debugging, as the user data may error out, you have to log in to the instance and check the /var/log/cloud-init-output.log file.
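For example, right after the instance comes up you can follow the user data as it runs (the paths are the cloud-init defaults on most distributions):

sudo tail -f /var/log/cloud-init-output.log # live output of the UserData script
sudo less /var/log/cloud-init.log # cloud-init's own log, if the output log is missing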
I could recreate your error and fixed it here. Here is the corrected template; I added to the answer from Marcin.
EC2Instance:
  Type: AWS::EC2::Instance
  CreationPolicy:
    ResourceSignal:
      Timeout: PT5M # Specify the timeout here
  Properties:
    CreditSpecification:
      CPUCredits: standard
    IamInstanceProfile:
      Fn::ImportValue:
        !Sub ${InstanceProfileStackName}-instanceProfile
    ImageId: !Ref ImageId
    InstanceInitiatedShutdownBehavior: stop
    InstanceType: !Ref InstanceType
    SubnetId: !Ref SubnetId
    SecurityGroupIds:
      - !Ref DefaultSecurityGroup
      - !Ref WebSecurityGroup
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash -ex
        yum update -y
        source scl_source enable rh-python36
        <Your additional commands>
        cfn-signal -e $? --stack ${AWS::StackName} --resource EC2Instance --region ${AWS::Region}
You might want to double-check the indentation before trying.
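A quick way to catch indentation mistakes before deploying is to validate the template with the AWS CLI (assuming it is saved as template.yaml):

aws cloudformation validate-template --template-body file://template.yaml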

Unable to start MySQL service in docker during gitlab-ci

I have the following .gitlab-ci.yml, taken from the example of Laravel Dusk CI:
stages:
  - build
  - test

# Variables
variables:
  MYSQL_ROOT_PASSWORD: root
  MYSQL_USER: root
  MYSQL_PASSWORD: secret
  MYSQL_DATABASE: test
  DB_HOST: mysql
  DB_CONNECTION: mysql

build:
  stage: build
  services:
    - mysql:5.7
  image: chilio/laravel-dusk-ci:stable
  script:
    - composer install --prefer-dist --no-ansi --no-interaction --no-progress --no-scripts
    # - npm install # if you need to install additional modules from your project's package.json
    # - npm run dev # if you need to run dev scripts, for example Laravel Mix
  cache:
    key: ${CI_COMMIT_REF_NAME}
    paths:
      # these are only examples; you should modify them according to your project,
      # or remove cache routines entirely if they are causing any problems on your next builds.
      # below are 2 safe ones if you use composer install and npm install in your stage script
      - vendor
      - node_modules
      # - /resources/assets/vendors # for example, if you put your vendor node-libraries there

test:
  stage: test
  cache:
    key: ${CI_COMMIT_REF_NAME}
    paths:
      - vendor
      - node_modules
    policy: pull
  services:
    - mysql:5.7
  image: chilio/laravel-dusk-ci:stable
  script:
    - cp .env.example .env
    # - cp phpunit.xml.ci phpunit.xml # if you are using a custom config for your phpunit tests in CI
    - configure-laravel
    - start-nginx-ci-project
    - ./vendor/phpunit/phpunit/phpunit -v --coverage-text --colors --stderr
    # - phpunit -v --coverage-text --colors --stderr # if you want to use the preinstalled phpunit
    - php artisan dusk --colors --debug
  artifacts:
    paths:
      - ./storage/logs # for debugging
      - ./tests/Browser/screenshots
      - ./tests/Browser/console
    expire_in: 7 days
    when: always
However, when the runner executes the job, I keep getting the following warning:
Using Docker executor with image chilio/laravel-dusk-ci:stable ...
Starting service mysql:5.7 ...
Pulling docker image mysql:5.7 ...
Using docker image sha256:66bc0f66b7af6ba3ea96582685d3afcd6dff93c2f8999da0ffadd67b280db548 for mysql:5.7 ...
Waiting for services to be up and running...
*** WARNING: Service runner-237f18d2-project-23-concurrent-0-mysql-0 probably didn't start properly.
Health check error:
ContainerStart: Error response from daemon: Cannot link to a non running container: /runner-237f18d2-project-23-concurrent-0-mysql-0 AS /runner-237f18d2-project-23-concurrent-0-mysql-0-wait-for-service/service
Service container logs:
2018-07-11T19:49:03.214991318Z
2018-07-11T19:49:03.215062485Z ERROR: mysqld failed while attempting to check config
2018-07-11T19:49:03.215067480Z command was: "mysqld --verbose --help"
2018-07-11T19:49:03.215070774Z
2018-07-11T19:49:03.215073778Z mysqld: error while loading shared libraries: libpthread.so.0: cannot stat shared object: Permission denied
I've tried to set the runner to privileged in the config.toml:
privileged = true
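For reference, that flag lives under the runner's Docker section in config.toml; a sketch, with the other fields depending on your runner registration:

[[runners]]
  executor = "docker"
  [runners.docker]
    privileged = true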
To solve the question:
mysqld: error while loading shared libraries: libpthread.so.0: cannot stat shared object: Permission denied
Step 1: update your software and kernel (maybe):
apt-get update && apt-get upgrade
Step 2: install the Docker dependency packages:
(ubuntu/debian): apt-get install apt-transport-https ca-certificates curl gnupg2 software-properties-common
(centos/redhat): yum install yum-utils device-mapper-persistent-data lvm2
Step 3: reboot your server and restart Docker:
reboot
systemctl restart docker
