I have a GitHub Actions workflow that executes a shell script, and inside this script I need to use gsutil.
In my workflow YAML file I have the following steps:
name: Dummy Script
on:
  workflow_dispatch:
jobs:
  build:
    runs-on: ubuntu-latest
    environment: alfa
    env:
      _PROJECT_ID: my-project
    steps:
      - uses: actions/checkout@v2
      - name: Set up Cloud SDK for ${{env._PROJECT_ID}}
        uses: google-github-actions/setup-gcloud@master
        with:
          project_id: ${{env._PROJECT_ID}}
          service_account_key: ${{ secrets.SA_ALFA }}
          export_default_credentials: true
      - run: gcloud projects list
      - name: Run script.sh
        run: |
          path="${GITHUB_WORKSPACE}/script.sh"
          chmod +x $path
          sudo $path
        shell: bash
And the script looks like:
#!/bin/bash
apt-get update -y
gcloud projects list
The second step in the YAML (run: gcloud projects list) works as expected, listing the projects SA_USER has access to.
But when running the script in step 3, I get the following output:
WARNING: Could not open the configuration file: [/root/.config/gcloud/configurations/config_default].
ERROR: (gcloud.projects.list) You do not currently have an active account selected.
Please run:
$ gcloud auth login
to obtain new credentials.
If you have already logged in with a different account:
$ gcloud config set account ACCOUNT
to select an already authenticated account to use.
Error: Process completed with exit code 1.
So my question is:
How can I run a shell script file and pass on the authentication I have for my service account so I can run gcloud commands from a script file?
Due to other requirements, the script file must be able to run both locally on developers' computers and from GitHub.
The problem seems to be that the environment variables are not inherited when the script is run with sudo, because sudo resets the environment by default. There are many ways to work around this, but I was able to confirm that it runs with sudo -E, which preserves the caller's environment. Of course, if you don't need to run with sudo you should remove it, but I assume it's necessary here.
(Thanks for the reproduction code; it made this easy to reproduce.)
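As a minimal sketch, the body of the last step's run block could look like this (same path as in the question; keep sudo only if the script really needs root):
path="${GITHUB_WORKSPACE}/script.sh"
chmod +x "$path"
# -E tells sudo to preserve the caller's environment, so variables exported by
# setup-gcloud (e.g. the default credentials path) stay visible to the script.
sudo -E "$path"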
Related
Hello, I am using Kedro (a pipeline tool) and want to use GitHub Actions to trigger a Kedro command (kedro run) whenever I push to my GitHub repo.
Since I have all the data in my local repo, I thought it would make sense to run the Kedro command on my local machine.
So my question is: is there a way to trigger a local run using GitHub Actions? Using self-hosted runners, perhaps?
You can run the Kedro pipeline directly on a GitHub-provided runner using the steps below. I've adapted a workflow I previously used to run kedro lint on every push.
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python 3.7.9
        uses: actions/setup-python@v2
        with:
          python-version: 3.7.9
      - uses: actions/cache@v2
        with:
          path: ${{ env.pythonLocation }}
          key: ${{ env.pythonLocation }}-${{ hashFiles('src/requirements.txt') }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r src/requirements.txt
      - name: Run Kedro Pipeline
        run: |
          kedro run
That said, I'm also wondering why you would need to run it on every push. Given that the compute provided by GitHub Actions is fairly resource-constrained, it might not be the best place to run the pipeline. You would also need to keep your data within your repo for this approach to work.
Hi @magical_unicorn, I think @avan-sh's answer is correct. What I would also add is that we encourage you not to commit data to VCS / Git. There are technical limitations such as file size, but more importantly it's not good security practice when working with teams.
Whilst not necessary on all projects, it might be good practice to explore using some sort of cloud storage for your data, decoupling it from the version control system.
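As a rough illustration only (the bucket name and paths are hypothetical, and the runner would need cloud credentials configured), the workflow could pull the raw data from cloud storage right before the pipeline runs instead of keeping it in the repo:
# Hypothetical: sync raw data from a bucket into the Kedro data folder, then run the pipeline.
mkdir -p data/01_raw
gsutil -m rsync -r gs://my-kedro-data/01_raw data/01_raw
kedro run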
I'm trying to make the go command available when I deploy my app with GitHub Actions.
The GitHub Actions log shows:
err: bash: line 15: go: command not found
Note: I have already installed Go, and the go command works over my SSH connection.
I expect the go command to work when I deploy through GitHub Actions using appleboy/ssh-action. How do I do that?
Edit:
Here's my GitHub Actions step:
- name: Deploy App and Deploy
  uses: appleboy/ssh-action@v0.1.2
  with:
    host: ${{secrets.SSH_HOST}} # IP address of the server you wish to ssh into
    key: ${{secrets.SSH_KEY}} # Private or public key of the server
    username: ${{ secrets.SSH_USERNAME }} # User of the server you want to ssh into
    script: |
      export NVM_DIR=~/.nvm
      source ~/.nvm/nvm.sh
      export GO_DIR=/usr/local/go
      source /usr/local/go/bin/go
      cd /root
      cd go
      cd deploying
      echo "Cloning Git Repo to /root/deploying"
      git clone https://aldhanekaa:${{secrets.GITHUB_TOKEN}}@github.com/aldhanekaa/Golang-audio-chat.git
      echo "Building Golang source"
      cd Golang-audio-chat
      go build
For example, to make the npm command available with appleboy/ssh-action, we just need to add:
export NVM_DIR=~/.nvm
source ~/.nvm/nvm.sh
But how do we do the same for go?
As user VonC said, I could point directly at the go binary file, but since /usr/local/go/bin/go is not as short as go, I decided to add the Go binary directory to $PATH instead.
So the solution comes up as:
adding PATH="/usr/local/go/bin/:$PATH" at the start of the appleboy/ssh-action script.
- name: Deploy App and Deploy
  uses: appleboy/ssh-action@v0.1.2
  with:
    host: ${{secrets.SSH_HOST}} # IP address of the server you wish to ssh into
    key: ${{secrets.SSH_KEY}} # Private or public key of the server
    username: ${{ secrets.SSH_USERNAME }} # User of the server you want to ssh into
    script: |
      export NVM_DIR=~/.nvm
      source ~/.nvm/nvm.sh
      PATH="/usr/local/go/bin/:$PATH"
Check first your PATH:
echo $PATH
If /usr/local/go/bin/ is not part of it, try:
/usr/local/go/bin/go build
I am looking for a way to clean up the runner after a job has been cancelled in GitLab. The reason is we often have to cancel running jobs because the runner is sometimes stuck in the test pipeline and I can imagine a couple of other scenarios where you would want to cancel a job and have a clean up script run after. I am looking for something like after_script but just in the case when a job was cancelled.
I checked the GitLab keyword reference but could not find what I need.
The following part of my gitlab-ci.yaml shows the test stage, which I would like to shut down gracefully by calling docker-compose down when the job is cancelled.
I am using a single gitlab-runner. Also, I don't use dind.
test_stage:
  stage: test
  only:
    - master
    - tags
    - merge_requests
  image: registry.gitlab.com/xxxxxxxxxxxxxxxxxxx
  variables:
    HEADLESS: "true"
  script:
    - docker login -u="xxxx" -p="${QUAY_IO_PASSWORD}" quay.io
    - npm install
    - npm run test
    - npm install wait-on
    - cd example
    - docker-compose up --force-recreate --abort-on-container-exit --build traefik frontend &
    - cd ..
    - apt install -y iproute2
    - export DOCKER_HOST_IP=$( /sbin/ip route|awk '/default/ { print $3 }' )
    - echo "Waiting for ${DOCKER_HOST_IP}/xxxt"
    - ./node_modules/.bin/wait-on "http://${DOCKER_HOST_IP}/xxx" && export BASE_URL=http://${DOCKER_HOST_IP} && npx codeceptjs run-workers 2
    - cd example
    - docker-compose down
  after_script:
    - cd example && docker-compose down
  artifacts:
    when: always
    paths:
      - /builds/project/tests/output/
  retry:
    max: 2
    when: always
  tags: [custom-runner]
Unfortunately this is not currently possible in GitLab. There have been several tickets opened in their repos, with this one being the most up-to-date.
As of the day that I'm posting this (September 27, 2022), there have been at least 14 missed deliverables for this. GitLab continues to say it's coming, but has never delivered it in the six years that this ticket has been open.
There are mitigations for automatic job cancellation, but unfortunately they will not help in your case.
Based on your use case, I can think of two different solutions:
Create a wrapper script that detects when parts of your test job are hanging (see the sketch below)
Set a timeout on the pipeline (In GitLab you can go to Settings -> CI/CD -> General Pipelines -> Timeout)
Neither of these solutions are as robust as if GitLab themselves implemented a solution, but they can at least prevent you from having a job hang for an eternity and clogging up everything else in the pipeline.
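For the first option, a minimal sketch (the 15-minute limit is arbitrary, and the commands are the ones from the job above most likely to hang): wrapping them in timeout makes the job fail instead of hanging, so after_script still gets a chance to run docker-compose down.
timeout 15m ./node_modules/.bin/wait-on "http://${DOCKER_HOST_IP}/xxx"
timeout 15m npx codeceptjs run-workers 2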
Testing an Azure DevOps pipeline for a Python project built with conda.
jobs:
  - job: pre_build_setup
    displayName: Pre Build Setup
    pool:
      vmImage: 'ubuntu-18.04'
    steps:
      - bash: echo "##vso[task.prependpath]$CONDA/bin"
        displayName: Add conda to PATH
  - job: build_environment
    displayName: Build Environment
    dependsOn: pre_build_setup
    steps:
      - script: conda env create --file environment.yml --name build_env
        displayName: Create Anaconda environment
      - script: conda env list
        displayName: environment installation verification
  - job: unit_tests
    displayName: Unit Tests
    dependsOn: build_environment
    strategy:
      maxParallel: 2
    steps:
      - bash: conda activate build_env
The last step, - bash: conda activate build_env, fails with the following error:
Script contents:
conda activate build_env
========================== Starting Command Output ===========================
/bin/bash --noprofile --norc /home/vsts/work/_temp/d5af1b5c-9135-4984-ab16-72b82c91c329.sh
CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
To initialize your shell, run
$ conda init <SHELL_NAME>
Currently supported shells are:
- bash
- fish
- tcsh
- xonsh
- zsh
- powershell
See 'conda init --help' for more information and options.
IMPORTANT: You may need to close and restart your shell after running 'conda init'.
##[error]Bash exited with code '1'.
Finishing: Bash
How can I activate conda? It seems the path is wrong, so it is unable to find conda.
CommandNotFoundError: Your shell has not been properly configured to
use 'conda activate'.
The issue here is that your script runs in a sub-shell, but conda hasn't been initialized in that sub-shell.
You need to change your activation script to:
steps:
  - task: Bash@3
    inputs:
      targetType: 'inline'
      script: |
        eval "$(conda shell.bash hook)"
        conda activate build_env
    displayName: Active
In addition, please do not split adding conda to PATH, creating the environment, and activating the environment into different jobs.
In Azure DevOps pipelines, the agent job is the basic unit of execution, and each agent job has its own independent running environment and work logic.
More specifically, in this scenario you are using a hosted agent to run your scripts.
When an agent job starts to run, the pool system assigns a VM to that job, and the VM is recycled once the job finishes. When the next agent job starts to run, a completely new VM is randomly assigned.
dependsOn can only share files and pass variables between jobs (see the sketch below); it cannot keep the same VM alive for the next job.
You can probably guess what problem you will hit next: even if the activate script succeeds, you will then face another error, Could not find conda environment: build_env. That's because the activate script is running on a brand new VM; the VM that the previous build_environment job used has already been recycled by the system.
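As an aside, here is a hedged sketch of the one thing dependsOn does let you pass between jobs, an output variable (the step name setvar and variable name condaEnvName are made up for illustration):
# In a named step (e.g. name: setvar) of the build_environment job:
echo "##vso[task.setvariable variable=condaEnvName;isOutput=true]build_env"
# A dependent job can then read it in its variables block as
#   $[ dependencies.build_environment.outputs['setvar.condaEnvName'] ]
# but any files created on the first job's VM are still gone.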
So, do not split creating the environment and activating it into two agent jobs:
- job: build_environment
  displayName: Build Environment
  dependsOn: pre_build_setup
  steps:
    - script: conda env create --file environment.yml --name build_env
      displayName: Create Anaconda environment
    - script: conda env list
      displayName: environment installation verification
    - task: Bash@3
      inputs:
        targetType: 'inline'
        script: |
          eval "$(conda shell.bash hook)"
          conda activate build_env
      displayName: Active
There's one more approach proposed by Microsoft which seems to be more robust.
In every step where you want the environment to be activated, you should run
source $CONDA/bin/activate <myEnv>
or just
source activate <myEnv>
if you've already added $CONDA/bin to the PATH variable. You may check the link above to find examples for Ubuntu, macOS and Windows.
In your case it would look as follows:
steps:
  - task: Bash@3
    inputs:
      targetType: 'inline'
      script: source activate build_env
    displayName: Active
Important note: as of now, if you pass the name of the environment to the activate script, the environment must have been created in the same job. However, if you're using a prefix (i.e. a path to the environment directory), it doesn't matter.
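As a minimal sketch of the prefix-based variant (the ./build_env directory name is an assumption, and the environment files must exist on the agent running the step):
conda env create --file environment.yml --prefix ./build_env   # create by path instead of by name
source $CONDA/bin/activate ./build_env                         # activate by the same path
python --version                                               # runs inside the prefixed environment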
Maven is properly installed on my gitlab-runner server. Executing mvn clean directly in my repo works, but when running my pipeline from the GitLab UI I get this error:
bash: line 60: mvn: command not found
ERROR: Job failed: exit status 1
I tried to fix the problem by adding a before_script section to the .gitlab-ci.yml file:
before_script:
  - export MAVEN_HOME=/usr/local/apache-maven
I also added the line:
environment = ["MAVEN_HOME=/usr/local/apache-maven"]
to the config.toml file.
The problem still persists, and my executor is shell.
Any advice?
I managed to fix the problem using this workaround:
script:
  - $MAVEN_HOME/bin/mvn clean
Just use a Maven Docker image: add one of the lines below as the first line of your .gitlab-ci.yml:
image: maven:latest or image: maven:3-jdk-10 or image: maven:3-jdk-9
Refer to: https://docs.gitlab.com/ee/ci/examples/artifactory_and_gitlab/
For anyone experiencing similar issues, it might be a good idea to restart the GitLab runner (.\gitlab-runner.exe restart), especially after fiddling with environment variables.
There is an easier way: make the changes in ~/.bash_profile, not ~/.bashrc.
According to this document:
.bashrc: it is more common to use a non-login shell
And this document says:
For certain executors, the runner passes the --login flag as shown above, which also loads the shell profile.
So it should not be ~/.bashrc. You can also try ~/.profile, which can hold the same configuration and is then also accessible by other shells.
In my scenario I did the following:
1. Set the gitlab-runner user's password.
passwd gitlab-runner
2. Log in as gitlab-runner.
su - gitlab-runner
3. Make the changes in ~/.bash_profile.
Add Maven to PATH:
$ export M2_HOME=/usr/local/apache-maven/apache-maven-3.3.9
$ export M2=$M2_HOME/bin
$ export PATH=$M2:$PATH
(The Maven install docs say you can include these commands in $HOME/.bashrc; for the runner, put them in ~/.bash_profile as explained above.)
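As a quick sketch to verify the change (paths as in step 3 above): log in as the runner user with a login shell so ~/.bash_profile is sourced, then check that mvn resolves:
su - gitlab-runner                        # login shell: sources ~/.bash_profile
echo $PATH | tr ':' '\n' | grep maven     # the Maven bin directory should be listed
mvn -version                              # should print the Maven and Java versions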
I hope you have figured this out by now. I ran into the same issue when building the CI on my own server.
I use shell as the executor for my runner.
Here are the steps to figure it out.
1. Check the user on the runner server.
If you installed Maven on the runner server successfully, it may only be available to root. You can check the real user for the CI process:
job1:
  stage: test
  script: whoami
In my case, it printed gitlab-runner, not root.
2. su to the real user and check mvn again.
This time, it printed the same error as the GitLab CI UI.
3. Install Maven for the real user and run the pipeline again.
You can also add the following to your .gitlab-ci.yml:
before_script:
  - export PATH=$PATH:/opt/apache-maven-3.8.1/bin
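A quick way to check that this works (the install path is the example from above; adjust it to your server) is to run the same export on the runner host as the runner's user and confirm mvn resolves:
export PATH=$PATH:/opt/apache-maven-3.8.1/bin
which mvn        # should print /opt/apache-maven-3.8.1/bin/mvn
mvn -version     # confirms Maven runs for this user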