GitHub Action appleboy/ssh-action: How to add the Go command

Here, I'm trying to run the go command when I deploy my app via GitHub Actions. The GitHub Actions log shows:
err: bash: line 15: go: command not found
Note: I have already installed Go, and the go command works over my SSH connection.
I'm expecting the go command to work when I deploy through GitHub Actions using appleboy/ssh-action. How can I do that?
Edit:
Here's my GitHub Actions script:
- name: Deploy App and Deploy
  uses: appleboy/ssh-action@v0.1.2
  with:
    host: ${{ secrets.SSH_HOST }} # IP address of the server you wish to ssh into
    key: ${{ secrets.SSH_KEY }} # Private or public key of the server
    username: ${{ secrets.SSH_USERNAME }} # User of the server you want to ssh into
    script: |
      export NVM_DIR=~/.nvm
      source ~/.nvm/nvm.sh
      export GO_DIR=/usr/local/go
      source /usr/local/go/bin/go
      cd /root
      cd go
      cd deploying
      echo "Cloning Git Repo to /root/deploying"
      git clone https://aldhanekaa:${{ secrets.GITHUB_TOKEN }}@github.com/aldhanekaa/Golang-audio-chat.git
      echo "Building Golang source"
      cd Golang-audio-chat
      go build
For example, to make the npm command available with appleboy/ssh-action, we just need to add:
export NVM_DIR=~/.nvm
source ~/.nvm/nvm.sh
But how about go?

As user VonC said, I could point to the Go binary directly, but since /usr/local/go/bin/go is not as short as go, I decided to add the Go binary directory to $PATH.
So the solution comes down to:
adding PATH="/usr/local/go/bin/:$PATH" at the start of the appleboy/ssh-action script.
- name: Deploy App and Deploy
  uses: appleboy/ssh-action@v0.1.2
  with:
    host: ${{ secrets.SSH_HOST }} # IP address of the server you wish to ssh into
    key: ${{ secrets.SSH_KEY }} # Private or public key of the server
    username: ${{ secrets.SSH_USERNAME }} # User of the server you want to ssh into
    script: |
      export NVM_DIR=~/.nvm
      source ~/.nvm/nvm.sh
      PATH="/usr/local/go/bin/:$PATH"

Check first your PATH:
echo $PATH
If /usr/local/go/bin/ is not part of it, try:
/usr/local/go/bin/go build
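
Putting the two answers together: an interactive SSH session finds go because login shells source profile files that extend PATH, while appleboy/ssh-action runs a non-interactive shell that may skip them. A minimal sketch of the remote script, assuming the stock /usr/local/go install location from the question:

export PATH="/usr/local/go/bin:$PATH"  # make go resolvable in this non-interactive shell
command -v go                          # sanity check: prints the resolved binary, fails otherwise
go version
go build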

Related

Build Go project in Jenkins with dependencies in private BitBucket repository using SSH keys

I'm trying to set up automated builds for Go projects. We have some internal dependencies available on our private BitBucket, and credentials are needed for go to access them. I'm able to read the main repo using the Git and SSH option, but I'm not able to download the dependencies from BitBucket.
I already tried with:
git config --global url."git@bitbucket.org:".insteadOf "https://bitbucket.org/"
export 'GOPRIVATE=bitbucket.org/*'
however this doesn't seem to work, given the output:
+ go version
22:33:27 go version go1.16.4 darwin/arm64
+ go test
22:33:29 go: missing Mercurial command. See https://golang.org/s/gogetcmd
22:33:30 go: bitbucket.org/repositorie_url: reading https://api.bitbucket.org/2.0/repositorie_url/dependency_repo 403 Forbidden
22:33:30 server response: Access denied. You must have write or admin access.
How could I make sure go get or go install gets access to our private repository in a secure way?
NOTE: go test seems to ignore the git configuration and tries to reach the dependencies over HTTPS; in addition I get some Mercurial errors.
Go private dependencies are a bit complicated to resolve. Try downloading the dependencies before you run go test or anything else. There are two solutions I can present; try them and let me know which one works for you:
1. Using an SSH key
When you have an SSH key that has access to the private repos, try this
(assuming the key is stored and retrieved as an env var named BITBUCKET_SSH_KEY):
mkdir -p ~/.ssh
echo "$BITBUCKET_SSH_KEY" > ~/.ssh/id_rsa
chmod 600 ~/.ssh/id_rsa
ssh-keygen -F bitbucket.org || ssh-keyscan bitbucket.org >>~/.ssh/known_hosts
git config --global url."git@bitbucket.org:".insteadOf "https://bitbucket.org/"
go env -w GOPRIVATE=bitbucket.org
go mod download
2. Using .netrc
You can generate a login token from Bitbucket. With this token, set two env vars, BITBUCKET_LOGIN and BITBUCKET_TOKEN, and then try the following:
go env -w GOPRIVATE=bitbucket.org
echo "machine bitbucket.org login ${BITBUCKET_LOGIN} password ${BITBUCKET_TOKEN}" > ~/.netrc
go mod download
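Either way, it can help to sanity-check the pieces before running go mod download; a quick sketch (exact output wording may differ):

ssh -T git@bitbucket.org                     # should greet you if the SSH key is accepted
git config --global --get-regexp insteadof   # confirms the https -> ssh rewrite is registered
go env GOPRIVATE                             # should print bitbucket.org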
Hello, I finally found the error, and the issue was the $PATH of the environment!
It seems the machine has a different PATH than the default one Jenkins uses.
If you want to use a certain environment from your local computer, you should add a PATH variable in the environment block: print $PATH in a local terminal and compare it with the $PATH in the Jenkinsfile.
The solution in the Jenkinsfile:
pipeline {
    agent {
        label 'macmini'
    }
    environment {
        PATH = "$HOME/go/bin:" +
               "/usr/local/bin:/Library/Apple/usr/bin:" +
               "$PATH"
    }
}
Console:
echo $PATH
Then override the $PATH environment in the Jenkinsfile as above, prepending "$HOME/go/bin" and "/usr/local/bin:/Library/Apple/usr/bin" to the existing "$PATH".

GitHub -> GCP, use gcloud commands inside shell script

I have a workflow in GitHub that executes a shell script, and inside this script I need to use gsutil.
In my workflow YAML file I have the following steps:
name: Dummy Script
on:
  workflow_dispatch:
jobs:
  build:
    runs-on: ubuntu-latest
    environment: alfa
    env:
      _PROJECT_ID: my-project
    steps:
      - uses: actions/checkout@v2
      - name: Set up Cloud SDK for ${{ env._PROJECT_ID }}
        uses: google-github-actions/setup-gcloud@master
        with:
          project_id: ${{ env._PROJECT_ID }}
          service_account_key: ${{ secrets.SA_ALFA }}
          export_default_credentials: true
      - run: gcloud projects list
      - name: Run script.sh
        run: |
          path="${GITHUB_WORKSPACE}/script.sh"
          chmod +x $path
          sudo $path
        shell: bash
And the script looks like:
#!/bin/bash
apt-get update -y
gcloud projects list
The 2nd step in the YAML (run: gcloud projects list) works as expected, listing the projects SA_USER has access to.
But when running the script in step 3, I get the following output:
WARNING: Could not open the configuration file: [/root/.config/gcloud/configurations/config_default].
ERROR: (gcloud.projects.list) You do not currently have an active account selected.
Please run:
$ gcloud auth login
to obtain new credentials.
If you have already logged in with a different account:
$ gcloud config set account ACCOUNT
to select an already authenticated account to use.
Error: Process completed with exit code 1.
So my question is:
How can I run a shell script file and pass along the authentication I have for my service account, so I can run gcloud commands from the script file?
Due to reasons, it's a requirement that the script file can run both locally on developers' computers and from GitHub.
The problem seemed to be that the environment variables were not inherited when running with sudo. There are many ways to work around this, but I was able to confirm that it runs with sudo -E. Of course, if you don't need to run with sudo you should remove it, but I assume it's necessary here.
(The reproduction code made this easy to reproduce. Thanks.)
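For reference, a minimal sketch of the fixed step (the rest of the workflow unchanged); -E is sudo's --preserve-env option, which keeps the credential variables exported by setup-gcloud (e.g. GOOGLE_APPLICATION_CREDENTIALS) visible to the script:

- name: Run script.sh
  run: |
    path="${GITHUB_WORKSPACE}/script.sh"
    chmod +x "$path"
    sudo -E "$path"
  shell: bash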

git-secret-reveal failed on GitHub Actions

I'm trying to use GitHub Actions for CI. I've created some secrets in the repository on GitHub and encrypted some files in the sources with the git-secret tool. In the end, I wrote the following YAML script as an action for GitHub:
build:
  runs-on: ubuntu-latest
  steps:
    - name: Checkout sources
      uses: actions/checkout@v2
    - name: Configure GPG Key
      uses: crazy-max/ghaction-import-gpg@v3
      with:
        gpg-private-key: ${{ secrets.GPG_SIGNING_KEY }}
        passphrase: ${{ secrets.SECRET_PWD }}
        git-user-signingkey: true
        git-commit-gpgsign: true
    - name: Reveal secrets
      env:
        SECRET_PWD: ${{ secrets.SECRET_PWD }}
      run: |
        sudo apt install git-secret
        git secret tell my@email.com
        git secret reveal -p $(echo $SECRET_PWD | sed 's/./& /g')
    - name: Build images
      run: docker-compose build
I suppose this describes the following pipeline:
1. Check out the current branch
2. Install the required tools for GPG, with a PK (GPG key?) and PWD
3. Add the user with the email from the PK to the whitelist
4. Decrypt the .secret files
5. Finally, build the Docker images.
Am I right?
My problem is with steps 3-4. I get an error in the logs:
> Setting up git-secret (0.2.3-1) ...
> Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
> done. my@email.com added as someone who know(s) the secret.
> cleaning up...
> Error: Process completed with exit code 1.
I've checked my solution on a local machine (Linux) and it works like a charm.
Well, maybe someone knows where my mistake in the YAML script is?
I would guess that the problem is the "git secret tell" line. The "tell" step needs to be done in advance by someone else (you) who already has the authority to reveal the secrets. From the documentation:
Now add this person to your secrets repo by running git secret tell
persons@email.id (this will be the email address associated with the
public key)
The newly added user cannot yet read the encrypted files. Now,
re-encrypt the files using git secret reveal; git secret hide -d, and
then commit and push the newly encrypted files.
It looks like the "git secret reveal" step failed. Did you re-encrypt and push the secret files after calling "git secret tell my@email.com" locally?
In the GitHub action itself, you don't need to run the "tell" step again.
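A sketch of that local flow, following the quoted documentation (run it as someone who can already reveal the secrets; file names depend on your repo):

git secret tell my@email.com  # register the CI key's email, once, locally
git secret reveal             # decrypt with your own key
git secret hide -d            # re-encrypt for the updated keyring; -d removes the plaintext copies
git add .gitsecret *.secret   # keyring changes plus the re-encrypted files
git commit -m "re-encrypt secrets for CI key"
git push

After that push, the workflow only needs git secret reveal, and the tell line can be dropped.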

pm2 deploy fails after full fetch

I want to deploy a simple app to my EC2 instance but I got this error:
bash: line 0: cd: /home/ubuntu/source: No such file or directory
fetch failed
Deploy failed
1
I don't understand why there is a 'source' directory when I haven't created it on my virtual or local machine. It's like pm2 created it on its own. Can someone explain why it's there and how I can deploy successfully?
My ecosystem.config.js:
module.exports = {
  apps: [{
    name: 'puk',
    script: 'project/'
  }],
  deploy: {
    production: {
      user: 'ubuntu',
      host: 'ec2-35-180-119-129.eu-west-3.compute.amazonaws.com',
      key: '~/.ssh/id_rsa.pub',
      ref: 'origin/master',
      repo: 'git@github.com:nalnir/pukinn.git',
      path: '/home/ubuntu/',
      'post-deploy': 'npm install && pm2 startOrRestart ecosystem.config.js'
    }
  }
}
Full log after pm2 deploy production command:
--> Deploying to production environment
--> on host ec2-35-180-119-129.eu-west-3.compute.amazonaws.com
○ deploying origin/master
○ executing pre-deploy-local
○ hook pre-deploy
○ fetching updates
○ full fetch
bash: line 0: cd: /home/ubuntu/source: No such file or directory
fetch failed
Deploy failed
1
I faced the same issue and found this thread, but the above answer/comments were not very helpful for me, and there is no helpful document on the PM2 website either. So I went through all the steps one by one from the beginning:
Do the first setup before calling the update command on any existing folder, because PM2 creates its own folder structure: [current, source, shared] (read here):
pm2 deploy ecosystem.config.js stage setup
When you want to deploy new code, do it with the command below:
pm2 deploy ecosystem.config.js stage update --force
Why --force?
You may have some changes in your local system that aren't pushed to your git repository, and since the deploy script gets the update via git pull they will not be on your server. If you want to deploy without pushing any data, you can append the --force option.
My deploy object in the ecosystem.config.js file:
deploy: {
  stage: {
    // Deploy new: pm2 deploy ecosystem.config.js stage setup
    // Update: pm2 deploy ecosystem.config.js stage update --force
    user: '_MY_SERVER_USER_NAME_', // remote server username
    host: '_MY_REMOTE_SERVER_IP_', // remote server IP
    ref: 'origin/stage', // remote branch to deploy
    repo: 'git@bitbucket.org:_MY_REPO_SSH_CLONE_URL_.git', // repo URL
    path: '_REMOTE_DIRECTIVE_', // root path on the server, e.g. /home/ubuntu/
    'pre-deploy-local': '',
    'post-deploy': 'npm install && pm2 reload ecosystem.config.js --only MyAppName',
    'pre-setup': ''
  }
}
I hope it will be helpful for others.
The script parameter expects the actual script path, not a directory.
You should change it to the name of your main script, for example: script: './index.js'
You should also update deploy.production.path to something like /home/ubuntu/project
As stated in the Ecosystem file reference, script expects the path of the script to launch.
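Put together, a sketch of the corrected file (./index.js is an assumed entry point; use your real main script):

module.exports = {
  apps: [{
    name: 'puk',
    script: './index.js' // path to the main script, not a directory
  }],
  deploy: {
    production: {
      user: 'ubuntu',
      host: 'ec2-35-180-119-129.eu-west-3.compute.amazonaws.com',
      key: '~/.ssh/id_rsa', // the SSH private key; pointing at the .pub will not authenticate
      ref: 'origin/master',
      repo: 'git@github.com:nalnir/pukinn.git',
      path: '/home/ubuntu/project', // pm2 creates current/, source/ and shared/ under this path
      'post-deploy': 'npm install && pm2 startOrRestart ecosystem.config.js'
    }
  }
}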

What is the correct usage of cache/artifacts in Gitlab CI?

I am facing an issue where cached files are not used in project builds. In my case, I want to download Composer dependencies in the build stage and then add them to the final project folder after all other stages succeed. I thought that if you set the cache attribute in the .gitlab-ci.yml file, it would be shared and used in other stages as well. But this sometimes works and sometimes doesn't.
The GitLab version is 9.5.4.
Here is my .gitlab-ci.yml file:
image: ponk/debian:jessie-ssh
variables:
  WEBSERVER: "user@example.com"
  WEBSERVER_DEPLOY_DIR: "/domains/example.com/web-presentation/deploy/"
  WEBSERVER_CDN_DIR: "/domains/example.com/web-presentation/cdn/"
  TEST_VENDOR: '[ "$(ls -A ${WEBSERVER_DEPLOY_DIR}${CI_COMMIT_REF_NAME}/${CI_COMMIT_SHA}/vendor)" ]'
cache:
  key: $CI_PIPELINE_ID
  untracked: true
  paths:
    - vendor/
before_script:
stages:
  - build
  - tests
  - deploy
  - post-deploy
Build sources:
  image: ponk/php5.6
  stage: build
  script:
    # Install composer dependencies
    - composer -n install --no-progress
  only:
    - tags
    - staging
Deploy to Webserver:
  stage: deploy
  script:
    - echo "DEPLOYING TO ... ${WEBSERVER_DEPLOY_DIR}${CI_COMMIT_REF_NAME}/${CI_COMMIT_SHA}"
    - ssh $WEBSERVER mkdir -p ${WEBSERVER_DEPLOY_DIR}${CI_COMMIT_REF_NAME}/${CI_COMMIT_SHA}
    - rsync -rzha app bin vendor www .htaccess ${WEBSERVER}:${WEBSERVER_DEPLOY_DIR}${CI_COMMIT_REF_NAME}/${CI_COMMIT_SHA}
    - ssh $WEBSERVER '${TEST_VENDOR} && echo "vendor is not empty, build seems ok" || exit 1'
    - ssh $WEBSERVER [ -f ${WEBSERVER_DEPLOY_DIR}${CI_COMMIT_REF_NAME}/${CI_COMMIT_SHA}/vendor/autoload.php ] && echo "vendor/autoload.php exists, build seems ok" || exit 1
    - echo "DEPLOYED"
  only:
    - tags
    - staging
Post Deploy Link PRODUCTION to Webserver:
  stage: post-deploy
  script:
    - echo "BINDING PRODUCTION"
    - ssh $WEBSERVER unlink ${WEBSERVER_DEPLOY_DIR}production-latest || true
    - ssh $WEBSERVER ln -s ${WEBSERVER_DEPLOY_DIR}${CI_COMMIT_REF_NAME}/${CI_COMMIT_SHA} ${WEBSERVER_DEPLOY_DIR}production-latest
    - echo "BOUNDED $CI_COMMIT_SHA -> production-latest"
    - ssh $WEBSERVER sudo service php5.6-fpm reload
  environment:
    name: production
    url: http://www.example.com
  only:
    - tags
Post Deploy Link STAGING to Webserver:
  stage: post-deploy
  script:
    - echo "BINDING STAGING"
    - ssh $WEBSERVER unlink ${WEBSERVER_DEPLOY_DIR}staging-latest || true
    - ssh $WEBSERVER ln -s ${WEBSERVER_DEPLOY_DIR}${CI_COMMIT_REF_NAME}/${CI_COMMIT_SHA} ${WEBSERVER_DEPLOY_DIR}staging-latest
    - echo "BOUNDED ${CI_COMMIT_SHA} -> staging-latest"
    - ssh $WEBSERVER sudo service php5.6-fpm reload
  environment:
    name: staging
    url: http://staging.example.com
  only:
    - staging
In the GitLab documentation it says: cache is used to specify a list of files and directories which should be cached between jobs.
From what I understand I've set up the cache correctly: I have untracked set to true, paths includes the vendor folder, and key is set to the pipeline ID, which should be the same in the other stages as well.
I've seen some setups that contained artifacts, but unless you use them together with dependencies, they shouldn't have any effect.
I don't know what I'm doing wrong. I need to download the Composer dependencies first so I can copy them via rsync in the next stage. Do you have any ideas/solutions? Thanks.
Artifacts should be used to permanently make available any files you may need at the end of a pipeline, for example generated binaries, required files for the next stage of the pipeline, coverage reports, or maybe even a disk image. Cache, on the other hand, should be used to speed up the build process, for example when compiling a C/C++ binary: the first build usually takes a long time, but subsequent builds are faster because they don't start from scratch, so storing the compiler's temporary files with cache speeds up compilation across different pipelines.
So to answer you: you should use artifacts, because you seem to need to run Composer in every pipeline but want to pass the files on to the next job. You do not need to explicitly define dependencies in your gitlab-ci.yml, because if they are not defined each job pulls all the artifacts from all previous jobs. Cache should work, but it is unreliable; it is better for setups where it is an optimization rather than a necessity.
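For example, the build job from the question could publish vendor/ as an artifact instead of relying on cache (the expire_in value is an arbitrary assumption):

Build sources:
  image: ponk/php5.6
  stage: build
  script:
    - composer -n install --no-progress
  artifacts:
    paths:
      - vendor/
    expire_in: 1 week
  only:
    - tags
    - staging

The later Deploy to Webserver job then receives vendor/ automatically; no dependencies key is needed.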
