Npm commands not running over SSH - continuous-integration

I really can't understand why I can run npm commands on the server over SSH, but the same script fails in the deploy pipeline.
Result when I run deploy.sh manually over SSH:
Starting deploy
Already up to date.
v16.7.0
7.20.3
Deploy end
Result in CircleCI:
Starting deploy
Already up to date.
deploy.sh: line 6: node: command not found
deploy.sh: line 7: npm: command not found
Deploy end
version: 2.1
# Define the jobs we want to run for this project
jobs:
  pull-and-build:
    docker:
      - image: arvindr226/alpine-ssh
    steps:
      - checkout
      - run: ssh -o StrictHostKeyChecking=no ubuntu@xx.xx.xx.xx "cd ~/apps/clm/core; sudo bash deploy.sh"
# Orchestrate our job run sequence
workflows:
  version: 2
  build-project:
    jobs:
      - pull-and-build:
          filters:
            branches:
              only:
                - Desenvolvimento
My bash script
#!/bin/bash
echo "Starting deploy"
cd ~/apps/clm/core
git pull
node -v
npm -v
echo "Deploy end"
Thanks a lot to anyone who can help.
I really don't understand what's going on; I've tried everything I could find...

The problem was the PATH! nvm adds node and npm to the PATH via ~/.nvm/nvm.sh, which is sourced from ~/.bashrc, but a non-interactive SSH session does not read that file, so the commands are not found. Sourcing nvm.sh explicitly before running anything fixes it:
ssh -o StrictHostKeyChecking=no xxx@xx.xxx.xx.xx "source ~/.nvm/nvm.sh; cd ~/apps/xxx/xxx; git pull; npm install; pm2 restart core client"
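The same fix can also live in the script itself. Here is a sketch of deploy.sh that loads nvm before calling node, assuming the default nvm install location under $HOME/.nvm:

```shell
#!/bin/bash
# Non-interactive SSH sessions skip the ~/.bashrc lines that
# normally put node/npm on the PATH, so load nvm explicitly.
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"

echo "Starting deploy"
cd ~/apps/clm/core
git pull
node -v
npm -v
echo "Deploy end"
```

Note that running the script with sudo can change $HOME to root's home, in which case the nvm path may need to point at the deploy user's home directory explicitly.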

Related

How to run Jmeter Script in bitbucket Pipeline

I'm quite new to Bitbucket Pipelines and I was looking into how to run my JMeter script in the pipeline without using Jenkins or Bamboo. I created a bitbucket-pipelines.yml file and got the error "jmeter: command not found".
Here is the script that I created:
pipelines:
  branches:
    master:
      - step:
          name: Jmeter
          script:
            - jmeter run Observability_Test.jmx
If the jmeter command is not found, you need to install JMeter before trying to launch the test.
Example configuration:
pipelines:
  default:
    - step:
        script:
          - apt-get update && apt-get install --no-install-recommends -y openjdk-11-jre wget
          - cd /opt && wget https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.4.3.tgz && tar xf apache-jmeter-5.4.3.tgz
          - cd apache-jmeter-5.4.3/bin && ./jmeter -n -t /path/to/Observability_Test.jmx
More information: Tips for scripting tasks with Bitbucket Pipelines
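To avoid cd-ing into the JMeter bin directory for every command, the install step could instead put JMeter on the PATH; a sketch of the last two script lines (the results-file name is an assumption):

```yaml
- export PATH=$PATH:/opt/apache-jmeter-5.4.3/bin
- jmeter -n -t /path/to/Observability_Test.jmx -l results.jtl
```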

Gitlab-CI CE executor /usr/bin/bash: line 113: git: command not found

I have a local GitLab CE server and a gitlab-ci runner, all running in Docker containers. I just wanted to test whether GitLab CI works with minimal code in .gitlab-ci.yml. However, the CI does not run at all, the git version is never printed, and it shows these errors:
Running with gitlab-runner 14.2.0 (58ba2b95)
on GitLab-cicd-practice GPdsWyY7
Preparing the "shell" executor 00:00
Using Shell executor...
Preparing environment 00:00
Running on gitlab...
Getting source from Git repository 00:01
Fetching changes...
bash: line 113: git: command not found
ERROR: Job failed: exit status 1
Code for .gitlab-ci.yml
build:
  stage: build
  before_script:
    - git --version
  script: echo hello

test:
  script: echo
  stage: test
  needs: [build]
Looking at this, the runner is using the shell executor, so the problem is that git is not installed on the machine running gitlab-runner.
To fix this, install git on that machine so gitlab-runner can use it.
On Linux you should be able to install it with apt or yum:
apt install git
or
yum install git
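Alternatively, if switching the runner to the docker executor is an option, the job can install git inside the job image instead of on the host; a sketch, where the image choice is an assumption:

```yaml
build:
  stage: build
  image: debian:bullseye
  before_script:
    - apt-get update && apt-get install -y git
    - git --version
  script: echo hello
```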

How to integrate CD with CircleCI?

I've been following a lot of tutorials on CI using Python, but the tutorials seem to stop there and rarely take the next step to CD. I'm a sole developer as well.
I've set up a project on GitHub that runs locally on my PC and is not a web app. I've connected it to CircleCI for CI. Here is my config.yml file:
version: 2
jobs:
  build:
    docker:
      - image: circleci/python:3.7
    working_directory: ~/repo
    steps:
      # Step 1: obtain repo from GitHub
      - checkout
      - run:
          name: install dependencies
          command: |
            sudo apt-get update
            pip install -r requirements.txt
      - run:
          name: run tests
          command: |
            python -m pytest -v
Everything runs great, and I get an email from CircleCI alerting me that the build failed when I push to master on GitHub and one of the pytests fails.
So my question is: what is the next step here? I have a few thoughts, but honestly I'm not sure about any of them.
Create separate test and prod versions of the code, and automate updating the prod version when the test version builds with no errors. However, I'm not sure what tools to use for this.
Push the project to Dockerhub. This seems redundant to me, though, because Docker would run the same pytests that CircleCI is running. I'm not sure how this would even help with CD at this point.
Could someone please provide some guidance on next steps here?
Currently you have only one job, build, so you can add more jobs under the jobs section. What you want to do here is:
add a test job
build a prod version
push to Dockerhub
Use config version 2.1 to enable workflows.
version: 2.1
jobs:
  build:
    docker:
      - image: circleci/python:3.7
    working_directory: ~/repo
    steps:
      # Step 1: obtain repo from GitHub
      - checkout
      - run:
          name: install dependencies
          command: |
            sudo apt-get update
            pip install -r requirements.txt
      - run:
          name: run tests
          command: |
            python -m pytest -v
  test:
    docker:
      - image: circleci/python:3.7
    steps:
      - checkout
      - run: echo "do your test here"
  build-prod:
    docker:
      - image: circleci/python:3.7
    steps:
      - checkout
      - run: echo "build your app"
  push-to-dockerhub:
    docker:
      - image: circleci/python:3.7
    steps:
      - checkout
      - setup_remote_docker # this is necessary to use the docker daemon
      - run: echo "do docker login and docker push here"
workflows:
  build-and-push:
    jobs:
      - build
      - test:
          requires:
            - build
      - build-prod:
          requires:
            - test
      - push-to-dockerhub:
          requires:
            - build-prod
Note that requires runs a job only when the jobs it lists have finished successfully.
I haven't tested this config myself, but it should look like the above. There is more in the configuration reference, so please take a look to make it work perfectly:
https://circleci.com/docs/2.0/configuration-reference/
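As an illustration, the push-to-dockerhub placeholder step might be filled in like this; the image name myapp and the DOCKERHUB_USER/DOCKERHUB_PASS environment variables are assumptions, not part of the answer above ($CIRCLE_SHA1 is a built-in CircleCI variable):

```yaml
- run:
    name: build and push image
    command: |
      echo "$DOCKERHUB_PASS" | docker login -u "$DOCKERHUB_USER" --password-stdin
      docker build -t "$DOCKERHUB_USER/myapp:$CIRCLE_SHA1" .
      docker push "$DOCKERHUB_USER/myapp:$CIRCLE_SHA1"
```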

How do I build for Windows using my GitLab pipeline

I have the following .gitlab-ci.yml...
stages:
  - test
  - build
  - art

image: golang:1.9.2

variables:
  BIN_NAME: example
  ARTIFACTS_DIR: artifacts
  GO_PROJECT: example
  GOPATH: /go

before_script:
  - mkdir -p ${GOPATH}/src/${GO_PROJECT}
  - mkdir -p ${CI_PROJECT_DIR}/${ARTIFACTS_DIR}
  - go get -u github.com/golang/dep/cmd/dep
  - cp -r ${CI_PROJECT_DIR}/* ${GOPATH}/src/${GO_PROJECT}/
  - cd ${GOPATH}/src/${GO_PROJECT}

test:
  stage: test
  script:
    # Run all tests
    - go test -run ''

build:
  stage: build
  script:
    # Compile and name the binary as `hello`
    - go build -o hello
    - pwd
    - ls -l hello
    # Execute the binary
    - ./hello
    # Move to gitlab build directory
    - mv ./hello ${CI_PROJECT_DIR}
  artifacts:
    paths:
      - ./hello
This works great for Linux, but I now need to do the same so that it builds a Windows executable.
I then plan to run a scheduled script to download the artifacts.
The other option is to run a virtual Linux server on the Windows server and use that to run my Go binaries.
I know I need to change the image to a Windows one, but I can't seem to find an appropriate one online (one that is configured for Golang).
Or is it possible to have this Docker image build a Windows exe?
This is not a GitLab question but a Go question.
Since Go 1.5, cross-compiling has become very easy:
GOOS=windows GOARCH=386 go build -o hello.exe hello.go
You can then run hello.exe on a Windows machine.
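Applied to the pipeline above, an extra job that cross-compiles and keeps the Windows binary as an artifact might look like this sketch (the job name and the amd64 target for 64-bit Windows are assumptions):

```yaml
build-windows:
  stage: build
  script:
    - GOOS=windows GOARCH=amd64 go build -o hello.exe
    - mv ./hello.exe ${CI_PROJECT_DIR}
  artifacts:
    paths:
      - ./hello.exe
```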

Testing ASP.NET Core Docker Container with Travis CI

So I've configured my .travis.yml to build and test my ASP.NET Core project, but now I have to configure it to run in Docker. So far so good: I have the Dockerfile for the build, but then I started to wonder:
Should I run the tests inside the Docker container or outside? Or does it even matter?
If I should do it inside, how could this be achieved, since dotnet test doesn't have **/*/ support and my container doesn't run my bash script?
UPDATE:
Or should I build and test outside, and create the Docker image afterwards?
The Dockerfile is:
FROM microsoft/dotnet:latest
ARG source=.
WORKDIR /usr/src/project
COPY $source .
RUN dotnet restore
EXPOSE 5000
CMD dotnet build **/*/project.json
And the .sh is:
#!/bin/bash
cd test/
for D in `find ./ -maxdepth 1 -type d`
do
    # check for project.json inside each subdirectory, not the cwd
    if [ -f "${D}/project.json" ]
    then
        ( cd "${D}" && dotnet test )
    fi
done
Any suggestions are greatly appreciated.
So I decided that the docker build and publish should only be done if the build and tests succeed.
.travis.yml
language: csharp
sudo: required
solution: Solution.sln
mono: none
dotnet: 1.0.0-preview2-1-003177
services:
    - docker
install:
    - npm install -g bower
    - npm install -g gulp
before_script:
    - chmod a+x ./scripts/test.sh
script:
    - dotnet restore && dotnet build **/*/project.json
    - ./scripts/test.sh --quiet verify
    - if [ "$TRAVIS_BRANCH" == "master" ] ; then
      dotnet publish --no-build src/Main -o publish/ ;
      docker build -t project . ;
      fi
    
after_success:
    - if [ "$TRAVIS_BRANCH" == "master" ] ; then
      echo "push to Docker repo here" ;
      fi
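The push itself could then look roughly like this; the repository name and the DOCKER_USER/DOCKER_PASS variables are assumptions:

```yaml
after_success:
    - if [ "$TRAVIS_BRANCH" == "master" ] ; then
      echo "$DOCKER_PASS" | docker login -u "$DOCKER_USER" --password-stdin ;
      docker tag project "$DOCKER_USER/project:latest" ;
      docker push "$DOCKER_USER/project:latest" ;
      fi
```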
