I'm quite new to Bitbucket Pipelines and I was looking into how to run my JMeter script in the pipeline without using Jenkins or Bamboo. I created a bitbucket-pipelines.yml file and got the error "jmeter: command not found".
Here is the script that I created:
pipelines:
  branches:
    master:
      - step:
          name: Jmeter
          script:
            - jmeter run Observability_Test.jmx
If the jmeter command is not found, you need to install JMeter before trying to launch the test.
Example configuration:
pipelines:
  default:
    - step:
        script:
          - apt-get update && apt-get install --no-install-recommends -y openjdk-11-jre wget
          - cd /opt && wget https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.4.3.tgz && tar xf apache-jmeter-5.4.3.tgz
          - cd apache-jmeter-5.4.3/bin && ./jmeter -n -t /path/to/Observability_Test.jmx
More information: Tips for scripting tasks with Bitbucket Pipelines
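As an alternative to installing JMeter on every run, you can point the step at a Docker image that already ships JMeter, so the script stays a one-liner. A minimal sketch, assuming an image such as justb4/jmeter that puts jmeter on the PATH and includes a shell (substitute whatever image you actually use):

pipelines:
  default:
    - step:
        image: justb4/jmeter # assumed example image with JMeter preinstalled
        script:
          - jmeter -n -t Observability_Test.jmx -l results.jtl

The -n flag runs JMeter in non-GUI mode, -t names the test plan, and -l writes the results file, which matters in a pipeline where no display is available.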
I really can't understand why I can execute npm commands on my machine via SSH, but they don't work in the deploy pipeline. Result on my machine via SSH:
Starting deploy
Already up to date.
v16.7.0
7.20.3
Deploy end
Result in CircleCI:
Starting deploy
Already up to date.
deploy.sh: line 6: node: command not found
deploy.sh: line 7: npm: command not found
Deploy end
version: 2.1
# Define the jobs we want to run for this project
jobs:
  pull-and-build:
    docker:
      - image: arvindr226/alpine-ssh
    steps:
      - checkout
      - run: ssh -o StrictHostKeyChecking=no ubuntu@xx.xx.xx.xx "cd ~/apps/clm/core; sudo bash deploy.sh"
# Orchestrate our job run sequence
workflows:
  version: 2
  build-project:
    jobs:
      - pull-and-build:
          filters:
            branches:
              only:
                - Desenvolvimento
My bash script:
#!/bin/bash
echo "Starting deploy"
cd ~/apps/clm/core
git pull
node -v
npm -v
echo "Deploy end"
Thanks a lot to anyone who helps. I really don't understand what's going on; I've tried looking everywhere...
The problem was in the PATH! A non-interactive SSH session doesn't load ~/.bashrc, so nvm never puts node and npm on the PATH. Sourcing nvm.sh explicitly in the remote command fixes it:
ssh -o StrictHostKeyChecking=no xxx@xx.xxx.xx.xx "source ~/.nvm/nvm.sh; cd ~/apps/xxx/xxx; git pull; npm install; pm2 restart core client"
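Equivalently, the script itself can load nvm, so the pipeline can keep calling a plain sudo bash deploy.sh. A sketch based on the deploy.sh above (the npm install and pm2 process names are taken from the one-liner and are assumptions for this setup):

#!/bin/bash
# Non-interactive SSH sessions skip ~/.bashrc, so load nvm explicitly
source ~/.nvm/nvm.sh
echo "Starting deploy"
cd ~/apps/clm/core
git pull
npm install
pm2 restart core client
echo "Deploy end"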
I'm using Concourse for building my java package.
In order to run integration tests of that package, I need a local instance of elasticsearch present.
Prior to ES version 8, all I was doing was installing ES in a Docker image that I would then use as the Concourse task's image resource to build my Java package in:
FROM openjdk:11-jdk-slim-stretch
RUN apt-get update && apt-get install -y procps
ADD "https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.9.3-amd64.deb" /es.deb
RUN dpkg -i es.deb
RUN rm es.deb
Later I would just start it right before building with:
/etc/init.d/elasticsearch start
Problems started when upgrading ES to version 8: that init.d file does not seem to exist anymore. Some of the advice I found suggests running ES as a container, i.e. running an ES container inside the Concourse container, which seems a bit too complex for my use case.
If you had similar problems in your projects, how did you solve them?
This is what I would do:
Build your docker image off of an official Elastic docker image, e.g.:
FROM elasticsearch:8.2.2
USER root
RUN apt update && apt install -y sudo
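Then build and push the image so Concourse can pull it (oozie/elastic is just the example repository name used below; substitute your own):

docker build -t oozie/elastic .
docker push oozie/elastic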
Start Elastic within your task. Suppose the image got pushed to oozie/elastic on Docker Hub. Then the following pipeline job should succeed:
jobs:
  - name: run-elastic
    plan:
      - task: integtest
        config:
          platform: linux
          image_resource:
            type: docker-image
            source:
              repository: oozie/elastic
          run:
            path: /bin/bash
            args:
              - -c
              - |
                (sudo -u elasticsearch /usr/share/elasticsearch/bin/elasticsearch -Expack.security.enabled=false -E discovery.type=single-node > elastic.log) &
                while ! curl http://localhost:9200; do sleep 10; done
It should result in a passing task run: Elasticsearch starts in the background, and the curl loop exits once the node responds on port 9200.
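If the tests need the cluster to be ready for work rather than merely listening on port 9200, the wait loop can poll the cluster health API instead. A small variation on the loop above (wait_for_status=yellow is an assumption that suits a single-node setup):

while ! curl -s "http://localhost:9200/_cluster/health?wait_for_status=yellow&timeout=30s"; do sleep 5; done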
I have the following Bitbucket YAML file, which is currently giving me a 552 FTP error code:
image: bitnami/git
pipelines:
  branches: # Pipelines that run automatically on a commit to a branch can also be triggered manually
    live:
      - step:
          script:
            - apt-get update
            - apt-get -qq install git-ftp
            - git ftp push -u $FTP_USERNAME_LIVE -p $FTP_PASSWORD_LIVE $FTP_URL_LIVE
    dev:
      - step:
          script:
            - apt-get update
            - apt-get -qq install git-ftp
            - git ftp push -u $FTP_USERNAME -p $FTP_PASSWORD -v $FTP_URL
Please ignore the live branch, as we haven't updated it yet; it's the dev branch I'm having issues with right now.
I've tried changing the FTP details, checking the storage available on the cPanel account, etc., and it still won't work and keeps giving this error. The FTP account is set to unlimited too. Does anyone have any suggestions for how to get around this?
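FTP code 552 means "exceeded storage allocation", so one way to confirm the server itself is rejecting uploads (rather than git-ftp misbehaving) is to upload a small file directly with curl from the same step. A diagnostic sketch, assuming $FTP_URL is a full ftp:// URL; testfile is just a placeholder:

- echo test > testfile
- curl -v -T testfile "$FTP_URL/" --user "$FTP_USERNAME:$FTP_PASSWORD"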
I've been following a lot of tutorials on CI using Python, but the tutorials seem to stop there and rarely take the next step to CD. I'm a sole developer as well.
I've set up a project on GitHub that runs locally on my PC and is not a web app. I've connected it to CircleCI for CI. Here is my config.yml file:
version: 2
jobs:
  build:
    docker:
      - image: circleci/python:3.7
    working_directory: ~/repo
    steps:
      # Step 1: obtain repo from GitHub
      - checkout
      - run:
          name: install dependencies
          command: |
            sudo apt-get update
            pip install -r requirements.txt
      - run:
          name: run tests
          command: |
            python -m pytest -v
Everything runs great, and I get an email from CircleCI alerting me that the build failed when I push to master on GitHub and one of the pytests fails.
So my question is: what is the next step here? I have a few thoughts, but I'm honestly not sure about any of them.
1. Create separate test and prod versions of the code, and automate updating the prod version when the test version builds with no errors. However, I'm not sure what tools to use for this.
2. Push the project to Docker Hub. This seems redundant to me, though, because Docker would run the same pytests that CircleCI is running. I'm not sure how this would even help with CD at this point.
Could someone please provide some guidance on next steps here?
Currently you have only one job, build, so you can add more jobs under the jobs section. What you want to do here is:
add a test job
build a prod version
push to Docker Hub
Please use config version 2.1 to enable workflows.
version: 2.1
jobs:
  build:
    docker:
      - image: circleci/python:3.7
    working_directory: ~/repo
    steps:
      # Step 1: obtain repo from GitHub
      - checkout
      - run:
          name: install dependencies
          command: |
            sudo apt-get update
            pip install -r requirements.txt
      - run:
          name: run tests
          command: |
            python -m pytest -v
  test:
    docker:
      - image: circleci/python:3.7
    steps:
      - checkout
      - run: echo "do your test here"
  build-prod:
    docker:
      - image: circleci/python:3.7
    steps:
      - checkout
      - run: echo "build your app"
  push-to-dockerhub:
    docker:
      - image: circleci/python:3.7
    steps:
      - checkout
      - setup_remote_docker # this is necessary to use the docker daemon
      - run: echo "do docker login and docker push here"
workflows:
  build-and-push:
    jobs:
      - build
      - test:
          requires:
            - build
      - build-prod:
          requires:
            - test
      - push-to-dockerhub:
          requires:
            - build-prod
Please make sure to use requires so that a job runs only when the jobs it requires have finished successfully.
I haven't tested this config on my end, but it should look like the above. There is more configuration documentation here, so please take a look at it to get things working perfectly:
https://circleci.com/docs/2.0/configuration-reference/
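For the push-to-dockerhub placeholder, the final run step would typically look something like the following sketch. DOCKERHUB_USER and DOCKERHUB_PASS are assumed to be environment variables configured in the CircleCI project settings, and yourname/yourapp is a placeholder image name:

      - setup_remote_docker
      - run:
          name: build and push image
          command: |
            echo "$DOCKERHUB_PASS" | docker login -u "$DOCKERHUB_USER" --password-stdin
            docker build -t yourname/yourapp:latest .
            docker push yourname/yourapp:latest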
I'd like to build a docker container from command line only - on windows.
On Linux it works like this:
docker build -t tcpdump - <<EOF
FROM ubuntu
RUN apt-get update && apt-get install -y <packages here>
EOF
Any ideas how to port it to windows?
I believe you can use this (in PowerShell, where a quoted string can span multiple lines):
ECHO "FROM python:3
RUN pip install requests" | docker build -t yourimage:tag -
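In PowerShell you can also make the multi-line input explicit with a here-string, which pipes the same Dockerfile text to docker build on stdin:

@"
FROM python:3
RUN pip install requests
"@ | docker build -t yourimage:tag -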
Please take a look at this doc as well.