Cannot run `source` in AWS CodeBuild - bash

I am using AWS CodeBuild along with Terraform for automated deployment of a Lambda-based service. I have a very simple buildspec.yml that accomplishes the following:
Get dependencies
Run Tests
Get AWS credentials and save to file (detailed below)
Source the creds file
Run Terraform
The step "source the creds file" is where I am having difficulty. I have a simple bash one-liner that grabs the AWS container creds via curl http://169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI and then saves them to a file in the following format:
export AWS_ACCESS_KEY_ID=SOMEACCESSKEY
export AWS_SECRET_ACCESS_KEY=MYSECRETKEY
export AWS_SESSION_TOKEN=MYSESSIONTOKEN
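For context, the one-liner is roughly equivalent to something like this (a sketch: it assumes jq is available in the build image and reuses the /path/to/creds_file.txt path mentioned below):
#!/bin/bash
# Fetch the temporary credentials from the container metadata endpoint
creds=$(curl -s "http://169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI")
# Write them out as export statements for later sourcing
cat > /path/to/creds_file.txt <<EOF
export AWS_ACCESS_KEY_ID=$(echo "$creds" | jq -r '.AccessKeyId')
export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | jq -r '.SecretAccessKey')
export AWS_SESSION_TOKEN=$(echo "$creds" | jq -r '.Token')
EOF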
Of course, the obvious step is to simply source this file so that these variables can be added to my environment for Terraform to use. However, when I do source /path/to/creds_file.txt, CodeBuild returns:
[Container] 2017/06/28 18:28:26 Running command source /path/to/creds_file.txt
/codebuild/output/tmp/script.sh: 4: /codebuild/output/tmp/script.sh: source: not found
I have tried to install source through apt but then I get an error saying that source cannot be found (yes, I've run apt update etc.). I am using a standard Ubuntu image with the Python 2.7 environment for CodeBuild. What can I do to either get working credentials to Terraform or source this credentials file in CodeBuild?
Thanks!

Try using . instead of source. source is not POSIX compliant. ss64.com/bash/source.html
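For example, a minimal buildspec fragment (a sketch only; chaining with && so the sourced variables are visible to the terraform call in the same shell invocation):
build:
  commands:
    - . /path/to/creds_file.txt && terraform apply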

CodeBuild now supports bash as your default shell. You just need to specify it in your buildspec.yml.
env:
  shell: bash
Reference: https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec-ref-syntax
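A minimal buildspec sketch using this option (the creds file path and terraform command are illustrative, carried over from the question):
version: 0.2
env:
  shell: bash
phases:
  build:
    commands:
      - source /path/to/creds_file.txt && terraform apply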

The AWS CodeBuild images ship with a POSIX compliant shell. You can see what's inside the images here: https://github.com/aws/aws-codebuild-docker-images.
If you're using specific shell features (such as source), it is best to wrap your commands in a script file with a shebang specifying the shell you'd like the commands to execute with, and then execute this script from buildspec.yml.
build-script.sh
#!/bin/bash
<commands>
...
buildspec.yml (snippet)
build:
  commands:
    - path/to/script/build-script.sh

I had a similar issue. I solved it by calling the script directly via /bin/bash <script>.sh
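In buildspec terms that would look roughly like this (the script name is illustrative, reusing the one from the previous answer):
build:
  commands:
    - /bin/bash ./build-script.sh
Note that variables exported inside the script only exist for that bash process, so any terraform commands that need them would have to run inside the same script.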

I don't have enough reputation to comment, so here is an extension of jeffrey's answer, which is spot on.
In case your filename starts with a dot (.), the following will fail:
. .filename
You will need to qualify the filename with the directory name, like:
. ./.filename

Related

Using a bash script in AWS CICD Code Pipeline

In my company we have to use AWS-native tooling for the CI/CD process. I'm used to deploying my services via a bash script, which makes deployments very flexible. However, I can't seem to find an option in AWS CodePipeline in which I can call a script. In the build phase I am able to call on a buildspec.yml file, but in the deploy phase I would like to be able to call a bash script. Does anyone know if that's possible using AWS CodePipeline?
This is what I put in the deployspec.yml to make it call on a bash script:
version: 0.2
env:
  shell: bash
phases:
  install:
    runtime-versions:
      python: 3.8
    commands:
      - source deployspec.sh && create-stack-name && upload-to-s3 && create-cf-stack
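A rough sketch of what the sourced deployspec.sh could contain; only the function names come from the command above, and the bodies (stack name, bucket, template file) are illustrative assumptions:
#!/bin/bash
# Sourcing this file defines the helper functions; the buildspec command
# above then calls them one by one in the same shell.
create-stack-name() {
  # Hypothetical: derive a stack name from the CodeBuild build number
  export STACK_NAME="myservice-${CODEBUILD_BUILD_NUMBER:-manual}"
  echo "Using stack name: ${STACK_NAME}"
}
upload-to-s3() {
  aws s3 cp ./template.yml "s3://my-deploy-bucket/${STACK_NAME}/template.yml"
}
create-cf-stack() {
  aws cloudformation deploy \
    --stack-name "${STACK_NAME}" \
    --template-file ./template.yml
}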

How to prepare the shell environment in an image executed by GitLab Runner?

I'm running CI jobs on a self-hosted GitLab instance plus 10 GitLab Runners.
For this minimal example, two Runners are needed:
Admin-01
A shell runner with Docker installed.
It can execute e.g. docker build ... to create new images, which are then pushed to the private Docker registry (also self-hosted / part of the GitLab installation)
Docker-01
A docker runner, which executes the previously built image.
On a normal bare-metal, virtual machine or shell runner, I would modify e.g. ~/.profile to execute commands before the before_script or script sections are executed. In my use case I need to set new environment variables and source some configuration files provided by the tools I want to run in an image. Yes, environment variables could be set differently, but there seems to be no way to source Bash scripts automatically before before_script or script sections are executed.
When sourcing the Bash source file manually, it works. I also notice that I have to source it again in the script block, so I assume the Bash session ends between the before_script block and the script block. Of course, it's not a nice solution to have the image users manually source the tool's Bash configuration script in every .gitlab-ci.yml file.
myjobn:
  # ...
  before_script:
    - source /root/profile.additions
    - echo "PATH=${PATH}"
    # ...
  script:
    - source /root/profile.additions
    - echo "PATH=${PATH}"
    # ...
The mentioned modifications for e.g. shell runners do not work in images executed by GitLab Runner. It feels like Bash in the container is not started as a login shell.
The minimal example image is built as follows:
fetch debian:bullseye-slim from Docker Hub
use RUN commands in the Dockerfile to append some echo statements to
/etc/profile
/root/.bashrc
/root/.profile
# ...
RUN echo "echo GREETINGS FROM /ROOT/PROFILE" >> /root/.profile \
&& echo "echo GREETINGS FROM /ETC/PROFILE" >> /etc/profile \
&& echo "echo GREETINGS FROM /ROOT/BASH_RC" >> /root/.bashrc
When the job starts, none of the echoes prints a message, while a cat shows that the echo commands were put in the right places when building the image.
Next I tried to add
SHELL ["/bin/bash", "-l", "-c"]
but I assume this only affects RUN commands in the Dockerfile, not the executed container. I also tried
CMD ["/bin/bash", "-l"]
but I see no behavior change.
Question:
How do I start Bash in the Docker image managed by GitLab Runner as a login shell so that it reads the configuration scripts?
How do I modify the environment in a container before before_script or script runs? Modifying means setting environment variables and executing / sourcing a configuration script or a patched default script like ~/.profile.
How does GitLab Runner execute a job with Docker?
This is not documented by GitLab in the official documentation ...
What I know so far: it jumps between Docker images specified by GitLab and user-defined images and shares some directories/volumes.
Note:
Yes, the behavior can be achieved with some Docker arguments in docker run, but as I wrote, GitLab Runner is managing the container. Alternatively, how can I configure how GitLab Runner launches the images? To my knowledge, there is no configuration option available / documented for this situation.
A shell runner with Docker installed. It can execute e.g. docker build ...
Use docker-in-docker or use kaniko. https://docs.gitlab.com/ee/ci/docker/using_docker_build.html
The shell executor is "the last resort", for when you specifically want to make changes to the server, or you are deploying your application "into" this server.
How do I start Bash in the Docker image managed by GitLab Runner as a login shell so that it reads the configuration scripts?
Add ENTRYPOINT bash -l to your image. Or set the entrypoint from gitlab-ci.yml. See the Docker documentation on ENTRYPOINT and the gitlab-ci.yml documentation on image: entrypoint: .
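A minimal sketch of the gitlab-ci.yml variant (the image name and job name are illustrative; whether the login-shell flags actually take effect depends on the runner version and how it invokes the container):
myjob:
  image:
    name: registry.example.com/group/myimage:latest
    entrypoint: ["/bin/bash", "-l", "-c"]
  script:
    - echo "PATH=${PATH}"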
How do I modify the environment in a container before before_script or script runs?
Build the image with modified environment. Consult Dockerfile documentation on ENV statements.
Or set the environment from gitlab-ci.yml file. Read documentation on variables: in gitlab-ci.
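For example (a sketch; the variable name and value are illustrative):
myjob:
  variables:
    MY_TOOL_HOME: /opt/mytool
  script:
    - echo "${MY_TOOL_HOME}"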
How to prepare the shell environment in an image executed by GitLab Runner?
Don't. The idea is that the environment is reproducible, ergo, there should be no changes beforehand. Add variables: in gitlab-ci file and use base images if possible.
How does GitLab Runner execute a job with Docker?
This is not documented by GitLab in the official documentation ...
Gitlab is open-source.
What I know so far: it jumps between Docker images specified by GitLab and user-defined images and shares some directories/volumes.
Yes, first a gitlab-runner-helper is executed - it has git and git-lfs and basically clones the repository and downloads and uploads the artifacts. Then the container specified with image: is run, the cloned repository is copied into it, and a specially prepared shell script is executed in it.

Codebuild Workflow with environment variables

I have a monolith github project that has multiple different applications that I'd like to integrate with an AWS Codebuild CI/CD workflow. My issue is that if I make a change to one project, I don't want to update the other. Essentially, I want to create a logical fork that deploys differently based on the files changed in a particular commit.
Basically my project repository looks like this:
- API
  - node_modules
  - package.json
  - dist
  - src
- REACTAPP
  - node_modules
  - package.json
  - dist
  - src
- scripts
  - 01_install.sh
  - 02_prebuild.sh
  - 03_build.sh
- .ebextensions
In terms of Deployment, my API project gets deployed to elastic beanstalk and my REACTAPP gets deployed as static files to S3. I've tried a few things but decided that the only viable approach is to manually perform this deploy step within my own 03_build.sh script - because there's no way to build this dynamically within Codebuild's Deploy step (I could be wrong).
Anyway, my issue is that I essentially need to create a decision tree to determine which project gets executed, so if I make a change to API and push, it doesn't automatically deploy REACTAPP to S3 unnecessarily (and vice versa).
I managed to get this working on localhost by updating environment variables at certain points in the build process and then reading them in separate steps. However, this fails on CodeBuild because of permission issues, i.e. I don't seem to be able to update env variables from within the CI process itself.
Explicitly, my buildspec.yml looks like this:
version: 0.2
env:
  variables:
    VARIABLES: 'here'
    AWS_ACCESS_KEY_ID: 'XXXX'
    AWS_SECRET_ACCESS_KEY: 'XXXX'
    AWS_REGION: 'eu-west-1'
    AWS_BUCKET: 'mybucket'
phases:
  install:
    commands:
      - sh ./scripts/01_install.sh
  pre_build:
    commands:
      - sh ./scripts/02_prebuild.sh
  build:
    commands:
      - sh ./scripts/03_build.sh
I'm running my own shell scripts to perform some logic and I'm trying to pass variables between scripts: install->prebuild->build
To give one example, here's the 01_install.sh where I diff each project version to determine whether it needs to be updated (excuse any minor errors in bash):
#!/bin/bash
# STAGE 1
# _______________________________________
# API PROJECT INSTALL
# Do if API version was changed in prepush (this is just a sample and I'll likely end up storing the version & previous version within the package.json):
# diff exits non-zero when the files differ, so negate it to detect a change
if ! diff ./api/version.json ./api/old_version.json > /dev/null 2>&1; then
  echo "🤖 Installing dependencies in API folder..."
  cd ./api/ && npm install
  ## Set a variable to be used by the 02_prebuild.sh script
  TEST_API="true"
  export TEST_API
else
  echo "No change to API"
fi
# ______________________________________
# REACTAPP PROJECT INSTALL
# Do if REACTAPP version number has changed (similar to above):
...
Then in my next stage, 02_prebuild.sh, I read these variables to determine whether I should run tests on the project:
#!/bin/bash
# STAGE 2
# _________________________________
# API PROJECT PRE-BUILD
# Do if install was initiated
if [[ $TEST_API == "true" ]]; then
  echo "🤖 Run tests on API project..."
  cd ./api/ && npm run tests
  echo $TEST_API
  BUILD_API="true"
  export BUILD_API
else
  echo "Don't test API"
fi
# ________________________________
# TODO: Complete for REACTAPP, similar to above
...
In my final script I use the BUILD_API variable to build to the dist folder, then I deploy that to either Elastic Beanstalk (for API) or S3 (for REACTAPP).
When I run this locally it works; however, when I run it on CodeBuild I get a permissions failure, presumably because my bash scripts cannot export environment variables. I'm wondering if anyone knows how to update environment variables from within the build process itself, or if anyone has a better approach to achieve my goals (a conditional/variable build process on CodeBuild).
EDIT:
So an approach that I've managed to get working: instead of using env variables, I'm creating new files with specific names on the filesystem and then reading the contents of those files to make logical decisions. I can access these files from each of the bash scripts, so it works pretty elegantly with some automatic cleanup.
I won't edit the original question as it's still an issue and I'd like to know how/if other people solved this. I'm still playing around with how to actually use the eb deploy and s3 cli commands within the build scripts, as CodeBuild does not seem to come with the eb cli installed and my .ebextensions file does not seem to be honoured.
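A minimal sketch of that file-flag approach (the file names are illustrative):
# In 01_install.sh: create a flag file instead of exporting a variable
touch ./build_api.flag

# In 02_prebuild.sh / 03_build.sh: check for the flag file
if [[ -f ./build_api.flag ]]; then
  echo "🤖 Run tests on API project..."
  cd ./api/ && npm run tests
fi

# In the final script: remove the flag files again as cleanup
rm -f ./build_api.flag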
Source control repos like GitHub can be configured to send a POST request to an API endpoint when you push to a branch. You can consume this POST request in Lambda through API Gateway. This event data includes which files were modified with the commit. The Lambda function can then process the event to figure out what to deploy. If you're struggling with deploying to your servers from the CodeBuild container, you might want to try posting an artifact to S3 with an installable package and then have your server grab it from there.

Gitlab CI/CD runner : mvn command not found

Maven is installed on my gitlab-runner server. When executing mvn clean directly in my repo it works; when running my pipeline using the GitLab UI I get this error:
bash: line 60: mvn: command not found
ERROR: Job failed: exit status 1
Note that I tried to fix the problem by adding a before_script section in the .gitlab-ci.yml file:
before_script:
  - export MAVEN_HOME=/usr/local/apache-maven
I also added the line:
environment = ["MAVEN_HOME=/usr/local/apache-maven"]
to the config.toml file.
The problem still persists; my executor is: shell.
Any advice?
I managed to fix the problem using this workaround:
script:
  - $MAVEN_HOME/bin/mvn clean
Just use the Maven Docker image; add one of the lines below as the first line:
image: maven:latest or image: maven:3-jdk-10 or image: maven:3-jdk-9
Reference: https://docs.gitlab.com/ee/ci/examples/artifactory_and_gitlab/
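A minimal .gitlab-ci.yml sketch using the image (the job name is illustrative; this applies to Docker-based executors, not the shell executor from the question):
image: maven:latest

build-job:
  stage: build
  script:
    - mvn clean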
For anyone experiencing similar issues, it might be a good idea to restart the GitLab runner (".\gitlab-runner.exe restart"), especially after fiddling with environment variables.
There is an easier way:
Make changes in ~/.bash_profile, not ~/.bashrc.
According to this document:
.bashrc is more commonly used by non-login shells
This document says:
For certain executors, the runner passes the --login flag as shown above, which also loads the shell profile.
So it should not be ~/.bashrc. You can also try ~/.profile, which can hold the same configurations, which are then also accessible by other shells.
In my scenario I did the following:
1. Set the gitlab-runner user's password.
passwd gitlab-runner
2. Log in as gitlab-runner.
su - gitlab-runner
3. Make changes in .bash_profile
Add maven to PATH:
$ export M2_HOME=/usr/local/apache-maven/apache-maven-3.3.9
$ export M2=$M2_HOME/bin
$ export PATH=$M2:$PATH
You can include these commands in $HOME/.bashrc
I hope you have figured out your question. I ran into the same issue when I built my CI on my server.
I use the shell executor for my Runner.
Here are the steps to figure it out.
1. Check the user on the runner server
If you installed Maven on the runner server successfully, maybe it only works for root; you can check the real user for the CI process:
job1:
  stage: test
  script: whoami
In my case, it prints gitlab-runner, not root.
2. su to the real user and check mvn again
This time, it prints the same error as in the GitLab CI UI.
3. Install Maven for the real user and run the pipeline again.
You can also add the following in the .gitlab-ci.yml:
before_script:
  - export PATH=$PATH:/opt/apache-maven-3.8.1/bin

Run PHP script after deployment on AWS Elastic Beanstalk

I've created a PHP script that generates a local.xml file for Magento with the required database settings and credentials. I need to run this after the application is deployed; however I cannot seem to figure out a way to do so. My understanding is that I need to create a .config file inside of a .ebextensions directory. Anyone have a solution?
Technically Josh is not correct. According to the documentation (http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#customize-containers-format-commands), for the commands section: "The commands are processed in alphabetical order by name, and they run before the application and web server are set up and the application version file is extracted."
The closest I am aware of is the "container_commands" section: "The commands in container_commands are processed in alphabetical order by name. They run after the application and web server have been set up and the application version file has been extracted, but before the application version is deployed."
I don't know of a way to truly run a script post deployment (which is why I was here looking for that answer).
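For reference, a container_commands entry in a .ebextensions config file might look like this (the script name is hypothetical, standing in for the local.xml generator from the question):
container_commands:
  01_generate_local_xml:
    command: "php -f scripts/generate_local_xml.php"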
Elastic Beanstalk will look for files under the /opt/elasticbeanstalk/hooks/appdeploy/post directory to run after deployment.
So you can make use of this and do:
commands:
  create_post_dir:
    command: "mkdir /opt/elasticbeanstalk/hooks/appdeploy/post"
    ignoreErrors: true
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/job_after_deploy.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      cd /var/app/current
      # ** run your php script here **
Yup, .ebextensions are what you are looking for. To see how to bundle the source, take a look at the sample applications. There is a PHP one you can look at as well.
For more info on .ebextensions, take a look at this page.
Here's an example of a custom command. This could go in a file called sample.config within the .ebextensions directory:
commands:
  success_command:
    command: echo "this will be run after launching"
Be careful if you copy and paste YAML, and double-check the format. You can also use JSON, which follows a similar format.
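For example, a JSON equivalent of the sample.config above might look like this (a sketch, not copied from the docs):
{
  "commands": {
    "success_command": {
      "command": "echo \"this will be run after launching\""
    }
  }
}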
