How can I run a small command in a concourse pipeline?

I basically want to run the npm install and grunt build commands within the newly added repo. This is what I have so far:
inputs:
  - name: repo
    path:
run:
  path: repo/
  args:
    - npm install
    - grunt build

path: refers to the path in the container to the binary / script to execute.
Check out this example in the Tasks documentation: https://concourse-ci.org/tasks.html#task-environment
run:
  path: sh
  args:
    - -exc
    - |
      whoami
      env
sh is the program to execute, and args are passed to the sh program (-e exits on the first error, -x traces each command, and -c reads commands from the following string).
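Putting that together for the original question, a complete task config could look something like the sketch below; the node image and the npx invocation are assumptions, so adjust them to whatever provides npm and grunt in your setup.

platform: linux
image_resource:
  type: registry-image
  source:
    repository: node   # assumption: any image that ships npm will do
inputs:
  - name: repo
run:
  path: sh
  args:
    - -exc
    - |
      cd repo
      npm install
      npx grunt build   # assumes grunt is a devDependency of the repo

The cd repo is needed because each input is mounted as a subdirectory (named after the input) of the task's working directory.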

A slight variation of Topher Bullock's answer:
run:
  path: sh
  args:
    - -exc
    - whoami && env
which will run env only if whoami doesn't return an error.
This, on the other hand, will run env only if whoami fails:
run:
  path: sh
  args:
    - -exc
    - whoami || env

Related

GitLab CI/CD shows $HOME as null when concatenated with other variable value

I have defined the following stages and environment variables in my .gitlab-ci.yaml file:
stages:
  - prepare
  - run-test

variables:
  MY_TEST_DIR: "$HOME/mytests"

prepare-scripts:
  stage: prepare
  before_script:
    - cd $HOME
    - pwd
  script:
    - echo "Your test directory is $MY_TEST_DIR"
    - cd $MY_TEST_DIR
    - pwd
  when: always
  tags:
    - ubuntutest
When I run the above, I get the following error even though /home/gitlab-runner/mytests exists:
Running with gitlab-runner 15.2.1 (32fc1585)
on Ubuntu20 sY8v5evy
Resolving secrets
Preparing the "shell" executor
Using Shell executor...
Preparing environment
Running on PCUbuntu...
Getting source from Git repository
Fetching changes with git depth set to 20...
Reinitialized existing Git repository in /home/gitlab-runner/tests/sY8v5evy/0/childless/tests/.git/
Checking out cbc73566 as test.1...
Skipping Git submodules setup
Executing "step_script" stage of the job script
$ cd $HOME
/home/gitlab-runner
$ echo "Your test directory is $MY_TEST_DIR"
Your test directory is /mytests
$ cd $MY_TEST_DIR
Cleaning up project directory and file based variables
ERROR: Job failed: exit status 1
Is there something I'm doing wrong here? Why is $HOME empty/null when used together with another variable?
When you set a variable in the GitLab CI variables: section, $HOME isn't available yet, because that section isn't evaluated in a shell.
$HOME is set by your shell when the script (or before_script) part starts.
If you export the variable during the script step instead, it is available, so:
prepare-scripts:
  stage: prepare
  before_script:
    - cd $HOME
    - pwd
  script:
    - export MY_TEST_DIR="$HOME/mytests"
    - echo "Your test directory is $MY_TEST_DIR"
    - cd $MY_TEST_DIR
    - pwd
  when: always
  tags:
    - ubuntutest
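Note that references to GitLab's own predefined or project-level CI/CD variables do expand inside the variables: section; it is only variables set by the shell itself, like $HOME, that don't exist at that point. So a definition like this sketch (using the predefined CI_PROJECT_DIR variable) would work without the export:

variables:
  MY_TEST_DIR: "$CI_PROJECT_DIR/mytests"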

dynamic image name in CloudBuilder

I am building docker images with dynamic tags in CloudBuilder. I would like to be able to then run that same image, but I'm having trouble getting it to work.
Here's what I've got:
steps:
  - id: "Store value for docker image tag"
    name: ubuntu
    entrypoint: bash
    args:
      - -c
      - date +%Y%m%d%H%M%S > /workspace/image_tag.txt
  - name: 'gcr.io/cloud-builders/docker'
    entrypoint: bash
    args: [ '-c', 'docker build -t gcr.io/blah/my_image:$(cat /workspace/image_tag.txt) -f src/Dockerfile ./src' ]
  - name: 'gcr.io/cloud-builders/docker'
    entrypoint: bash
    args: [ '-c', 'docker push gcr.io/blah/my_image:$(cat /workspace/image_tag.txt)' ]
  ...
  - name: 'gcr.io/blah/my_image:$(cat /workspace/image_tag.txt)'
    entrypoint: /bin/sh
    args:
      - -c
      # - execute some commands and script within the image...
(gcr.io/blah/my_image is a custom builder)
Obviously, the name 'gcr.io/blah/my_image:$(cat /workspace/image_tag.txt)' does not work, I get an error:
Your build failed to run: generic::invalid_argument: invalid build: invalid build step name "gcr.io/blah/my_image:$(cat /workspace/image_tag.txt)": could not parse reference: gcr.io/blah/my_image:$(cat /workspace/image_tag.txt)
The image gets pushed fine, but I want a step that runs the image that was pushed earlier. Did I just mess up the syntax? If I can't do it as easily as I'd like, is there some other way to do this?
If you want the content to be interpreted, use " instead of ', like this:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: bash
  args: [ '-c', "docker build -t gcr.io/blah/my_image:$(cat /workspace/image_tag.txt) -f src/Dockerfile ./src" ]
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: bash
  args: [ '-c', "docker push gcr.io/blah/my_image:$(cat /workspace/image_tag.txt)" ]
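Note that nothing ever runs a shell on the name: field itself, so $(cat ...) can never expand there. If the final step only needs to run commands inside the freshly pushed image, one workaround is a sketch like the following, which launches the image through the docker builder so that bash can expand the tag at run time (the inner echo is a placeholder for your real commands):

- name: 'gcr.io/cloud-builders/docker'
  entrypoint: bash
  args: [ '-c', 'docker run --rm gcr.io/blah/my_image:$(cat /workspace/image_tag.txt) /bin/sh -c "echo your commands here"' ]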

Gitlab CI/CD: pass job environment variables to shell script

I have a job like this in my GitLab CI/CD configuration file:
jobName:
  stage: dev
  script:
    - export
    - env
    - sshpass -p $SSH_PASSWORD ssh -o StrictHostKeyChecking=no $SSH_LOGIN 'bash -s' < script.sh
  when: manual
I tried to share/pass the current job's env vars to my custom bash script file by adding these commands to my job:
- export
- env
But my script can't access (doesn't see) the job's env vars. How can I correctly share all of the job's env vars with the bash script?
I believe dotenv might be suitable for this.
job1:
  stage: stage1
  script:
    - export VAR=123
    - echo $VAR
    - echo "VAR=$VAR" > variables.env
  artifacts:
    reports:
      dotenv: variables.env

job2:
  stage: stage2
  script:
    - echo $VAR
And your VAR should be available in downstream jobs.
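For the ssh case in the original job, another option is to pass the variables explicitly on the remote command line, since ssh does not forward the local environment by default. A sketch, where VAR1 and VAR2 are placeholders for whatever variables your script needs:

script:
  - sshpass -p $SSH_PASSWORD ssh -o StrictHostKeyChecking=no $SSH_LOGIN "VAR1='$VAR1' VAR2='$VAR2' bash -s" < script.sh

The local shell expands $VAR1 and $VAR2 before the command line is sent, so the remote bash -s sees them as environment variables.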

Change current directory and stay there [duplicate]

- name: Go to the folder
  command: chdir=/opt/tools/temp
When I run my playbook, I get:
TASK: [Go to the folder] *****************************
failed: [host] => {"failed": true, "rc": 256}
msg: no command given
Any help is much appreciated.
There's no concept of a current directory that persists between tasks in Ansible. You can specify the working directory for a specific task, like you did in your playbook. The only missing part was the actual command to execute. Try this:
- name: Go to the folder and execute command
  command: chdir=/opt/tools/temp ls
This question was in the search results when I was trying to figure out why 'shell' was not respecting my chdir entries after I had to revert to Ansible 1.9, so I will post my solution.
I had
- name: task name
  shell:
    cmd: touch foobar
    creates: foobar
    chdir: /usr/lib/foobar
It worked with Ansible > 2, but for 1.9 I had to change it to:
- name: task name
  shell: touch foobar
  args:
    creates: foobar
    chdir: /usr/lib/foobar
Just wanted to share.
If you need a login shell (like for bundler), then you have to run the command like this:
command: bash -lc "cd /path/to/folder && bundle install"
You can change into a directory before running a command in Ansible with chdir.
Here's an example I just setup:
- name: Run a pipenv install
  environment:
    LANG: "en_GB.UTF-8"
  command: "pipenv install --dev"
  args:
    chdir: "{{ dir }}/proj"
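To see why a standalone "go to the folder" task can never work, consider this sketch (paths are placeholders): chdir only affects the task it belongs to, and the next task starts from the default directory again.

- name: This cd does not persist
  command: chdir=/opt/tools/temp pwd

- name: This task starts from the default directory again
  command: pwd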

./deploy.sh not working on gitlab ci

My problem is that the bash script I created gets this error on GitLab: "/bin/sh: eval: line 88: ./deploy.sh: not found". Below is my sample .gitlab-ci.yml.
I suspect that GitLab CI does not support bash scripts.
image: docker:latest

variables:
  IMAGE_NAME: registry.gitlab.com/$PROJECT_OWNER/$PROJECT_NAME
  DOCKER_DRIVER: overlay

services:
  - docker:dind

stages:
  - deploy

before_script:
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
  - docker pull $IMAGE_NAME:$CI_BUILD_REF_NAME || true

production-deploy:
  stage: deploy
  only:
    - master@$PROJECT_OWNER/$PROJECT_NAME
  script:
    - echo "$PRODUCTION_DOCKER_FILE" > Dockerfile
    - docker build --cache-from $IMAGE_NAME:$CI_BUILD_REF_NAME -t $IMAGE_NAME:$CI_BUILD_REF_NAME .
    - docker push $IMAGE_NAME:$CI_BUILD_REF_NAME
    - echo "$PEM_FILE" > deploy.pem
    - echo "$PRODUCTION_DEPLOY" > deploy.sh
    - chmod 600 deploy.pem
    - chmod 700 deploy.sh
    - ./deploy.sh
  environment:
    name: production
    url: https://www.example.com
And this is also my deploy.sh:
#!/bin/bash
ssh -o StrictHostKeyChecking=no -i deploy.pem ec2-user@targetIPAddress << 'ENDSSH'
  # commands go here
ENDSSH
All I want is to execute deploy.sh after docker push, but unfortunately I get this error about /bin/bash.
I really need your help, and I will be thankful if you can solve my problem with the GitLab CI bash script error "/bin/sh: eval: line 88: ./deploy.sh: not found".
This is probably related to the fact that you are using Docker-in-Docker (docker:dind). Your deploy.sh is requesting /bin/bash as the script executor, which is NOT present in that image.
You can test this locally on your computer with Docker:
docker run --rm -it docker:dind bash
It will report an error. So rewrite the first line of deploy.sh to
#!/bin/sh
After fixing that you will run into the problem that the other answer here addresses: ssh is not installed either. You will need to fix that too!
docker:latest is based on alpine linux which is very minimalistic and does not have a lot installed by default. For example, ssh is not available out of the box, so if you want to use ssh commands you need to install it first. In your before_script, add:
- apk update && apk add openssh
Thanks. This worked for me after adding bash:
before_script:
  - apk update && apk add bash
Let me know if that still doesn't work for you.
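Putting both fixes together, a sketch of the combined before_script for the alpine-based docker:latest image (the login and pull lines are the ones already in the question):

before_script:
  - apk update && apk add bash openssh
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
  - docker pull $IMAGE_NAME:$CI_BUILD_REF_NAME || true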
