Generate Locust reports on a TeamCity build server

I have been able to successfully deploy my Locust scripts to TeamCity, but I can't find the reports (CSV and HTML) that I specified. I included ls in my command-line build step and can see that the reports are in the working directory. How do I get these reports into the Artifacts tab?
My execution command looks like this:
locust -f public_apis/themes.py -u 50 -r 10 -t 300s --headless --print-stats --csv /reports/csv/result_for --csv-full-history --html /reports/html/docker_loadtest_result.html

I'm not familiar with TeamCity, but the paths you're saving the reports to suggest Docker could be involved. If that's the case, you need to mount a path from the host machine as a volume in the Docker container and save the reports there. If you don't, the reports (if they get created at all) stay inside the container.
How to mount a host directory in a Docker container
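As a sketch of that idea, assuming the build step starts the tests in a container (the image name my-locust-image and the host reports directory are hypothetical):
# Create the report subdirectories on the agent, then mount them over /reports.
mkdir -p reports/csv reports/html
docker run --rm -v "$(pwd)/reports:/reports" my-locust-image \
  locust -f public_apis/themes.py -u 50 -r 10 -t 300s --headless --print-stats \
  --csv /reports/csv/result_for --csv-full-history \
  --html /reports/html/docker_loadtest_result.html
The reports then land in ./reports on the agent, which you can list in the build configuration's artifact paths so they show up in the Artifacts tab.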

Related

How can I pass the path of aws on my local machine to a Jenkins image running in a Docker container?

I am trying to pass the path of aws on my host machine to Jenkins, which runs in a Docker container. I downloaded the Jenkins image and want to use AWS CLI commands in a Jenkins pipeline in order to build a Node.js application and then deploy it to an S3 bucket. For that I need the AWS CLI in the Jenkins image that I am running through Docker. As far as I know, once you run an image in a Docker container it is a separate environment in itself, so Jenkins will not know that I have aws installed on my Mac unless I pass it the path of aws on my Mac, which is what I am trying to do with the
-v $(which aws):$(which aws)
option.
docker run -d -p 8080:8080 -p 50000:50000 -v ~/jenkins_directory:/var/jenkins_home -v $(which aws):$(which aws) jenkins/jenkins:2.190.2
However, when I run this container from the command line, it shows the following error:
docker: Error response from daemon: Mounts denied:
The path /usr/local/bin/aws
is not shared from OS X and is not known to Docker.
According to some of the answers I found on Stack Overflow, I then tried to add the path of aws in Docker's file sharing panel. When I added it, Docker showed:
The path /usr is reserved by Docker however it may be possible to export specific subdirectories.
I have not been able to get around this. I tried adding the full
/usr/local/bin/aws
in Docker's file sharing panel, but it still shows the same problem. Does anyone have any idea what else we can do to pass the path of aws on my local machine to the Jenkins image that I am trying to run in a Docker container?
You need to install the AWS CLI in your Docker image; then you will be able to use it inside your container.
FROM jenkins/jenkins:2.190.2
USER root
RUN apt-get update && \
    apt-get install -y awscli
USER jenkins
-v (volumes) is not designed to bind host executables into a container; volumes are meant for files and folders that need persistent storage. If you need an executable, you need to add it to your Docker image.
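As a quick sketch of how to verify the resulting image (the tag jenkins-aws is hypothetical):
# Build the image from the Dockerfile above, then check the CLI is present.
docker build -t jenkins-aws .
docker run --rm --entrypoint aws jenkins-aws --version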
To be able to save (persist) data and also to share data between containers, Docker came up with the concept of volumes. Quite simply, volumes are directories (or files) that are outside of the default Union File System and exist as normal directories and files on the host filesystem.
understanding-volumes-docker
For this part of the question:
I am trying to use AWS CLI commands in a Jenkins pipeline in order to build a Node.js application and then deploy it to an S3 bucket.
If you are running inside AWS, you can assign an IAM role to the Jenkins server, and then you are not required to bind host credentials at all.
If you are outside AWS, you just need to bind the host's AWS config and credentials:
docker run -d -p 8080:8080 -p 50000:50000 -v ~/jenkins_directory:/var/jenkins_home -v $HOME/.aws/:/var/jenkins_home/.aws/ jenkins/jenkins:2.190.2
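To confirm the mounted credentials are visible, something like this is a reasonable smoke test (it assumes the running container is named jenkins and its image has the AWS CLI installed, e.g. the one built above):
# Ask the AWS CLI inside the container which identity it authenticates as.
docker exec jenkins aws sts get-caller-identity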

Running Jenkins on Docker (on Windows)... proper steps to run a pipeline job

I'm trying to implement Docker with Jenkins and am not sure if I am on the right track.
Given:
Running Jenkins on Docker from Windows
Plan on fetching code from GitHub, building the solution, running functional tests, etc. in a container somehow
What I've currently done:
(1) Installed Docker on Windows
(2) Successfully launched Jenkins on Docker with the command
"docker run –name myJenkins -p 8080:8080 -p 50000:50000 -v ~/Jenkins:/var/jenkins_home/ jenkins/jenkins:lts"
I believe this step binds my host machine's directory into the container as a volume. This allows me to view and access the Jenkins content.
(3) In my host machine's Jenkins directory, I've created a plugins.txt (containing a variety of Jenkins plugins I want installed) and a Dockerfile. The Dockerfile installs the plugins specified in the plugins.txt file.
FROM jenkins/jenkins:lts
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt
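For reference, install-plugins.sh expects one plugin ID per line, optionally pinned to a version; the entries below are purely hypothetical examples:
# plugins.txt — hypothetical example entries, plugin-id[:version] per line
git:4.2.2
workflow-aggregator:2.6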
(4) In the Windows command prompt, I built the image with the command "docker build -t new_jenkins_image ."
(5) I stop my current container "myJenkins" and create a new container with the command "docker run --name myJenkins2 -p 8080:8080 -p 50000:50000 -v ~/Jenkins:/var/jenkins_home/ new_jenkins_image". This loads Jenkins with the newly installed plugins.
What I'm stuck/confused on
(1) Do I have to create a new container with a new name every time I want to install new Jenkins plugins through the Dockerfile? This seems like a manual process as well... There has to be a better way.
(2) I started a basic Jenkins pipeline job with the "Pipeline script from SCM" option. I entered the correct repository URL and credentials but left the "Script Path" blank for now (I do not have a Jenkinsfile yet). When I execute the build, Jenkins does not fetch the code from GitHub.
java.lang.IllegalArgumentException: Empty path not permitted.
at org.eclipse.jgit.treewalk.filter.PathFilter.create(PathFilter.java:80)
at org.eclipse.jgit.treewalk.TreeWalk.forPath(TreeWalk.java:205)
at org.eclipse.jgit.treewalk.TreeWalk.forPath(TreeWalk.java:249)
at org.eclipse.jgit.treewalk.TreeWalk.forPath(TreeWalk.java:281)
at jenkins.plugins.git.GitSCMFile$3.invoke(GitSCMFile.java:171)
at jenkins.plugins.git.GitSCMFile$3.invoke(GitSCMFile.java:165)
at jenkins.plugins.git.GitSCMFileSystem$3.invoke(GitSCMFileSystem.java:193)
at org.jenkinsci.plugins.gitclient.AbstractGitAPIImpl.withRepository(AbstractGitAPIImpl.java:29)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.withRepository(CliGitAPIImpl.java:72)
at jenkins.plugins.git.GitSCMFileSystem.invoke(GitSCMFileSystem.java:189)
at jenkins.plugins.git.GitSCMFile.content(GitSCMFile.java:165)
at jenkins.scm.api.SCMFile.contentAsString(SCMFile.java:338)
at org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition.create(CpsScmFlowDefinition.java:110)
at org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition.create(CpsScmFlowDefinition.java:67)
at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:293)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Finished: FAILURE
I believe it's because the Docker container does not have Git installed? The container cannot access Git or MSBuild from my host machine... Do I have to create a new container here simply to fetch the code?
Can someone explain what I'm missing or where I went wrong?
From my understanding, the process goes like this: create a new pipeline job -> select pipeline script from SCM -> enter the repo URL, credentials, branch to build, and Jenkinsfile -> the Jenkinsfile executes instructions to compile, test, and deploy.
Where does the Dockerfile come into play here? Is my thought process on the right track?
You need to create a new container every time you change/update the image, but you are not required to give it a new name each time. Did you stop and remove the previously running container? If not, Docker gives an error like "a container with the same name cannot start". So stop and remove your previous container, and you will be able to start a new container with the same name from the updated image.
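As a sketch, the full rebuild cycle with the names from the question would look like:
# Stop and remove the old container, rebuild the image, start a fresh container.
docker stop myJenkins2
docker rm myJenkins2
docker build -t new_jenkins_image .
docker run --name myJenkins2 -p 8080:8080 -p 50000:50000 -v ~/Jenkins:/var/jenkins_home/ new_jenkins_image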
Yes, you need Git installed in the same container to pull the code; it cannot access Git on the host machine. But the error you are showing looks like a validation error, coming from the blank "Script Path" field (I mean Jenkins validates the input even before trying to pull the code; if you add some fake path, it will throw the next error, like git not found).
Your thinking is on the right track: create a new pipeline job -> select pipeline script from SCM -> enter the repo URL, credentials, branch to build, and Jenkinsfile -> the Jenkinsfile executes instructions to compile, test, and deploy.
At the end of the question you mentioned a different Dockerfile; I assume you are talking about a Dockerfile in your repository (Git). You can run your pipeline in a Docker agent. This removes the need to set up everything on the Jenkins host, meaning you don't need to install dependencies there just to run your pipeline code. For example, if you are trying to execute some Node.js code in the pipeline, you would normally need to set up Node.js on the Jenkins host first; to get rid of this, you can run the pipeline in a container where everything is pre-installed (see the sketch below). But I don't think you can use this feature if you are running Jenkins itself in Docker; you would need to set up Jenkins directly on the host in that case.
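The idea behind a Docker agent, expressed as plain shell (the node:lts image and npm commands are assumptions for illustration):
# Run the Node.js build inside a throwaway container instead of
# installing Node.js on the Jenkins host.
docker run --rm -v "$PWD":/app -w /app node:lts sh -c "npm ci && npm test"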

Running a Docker image on Windows 10 using a Makefile

I apologize if this is a silly question, but when I type the following inside PowerShell on Windows 10:
docker run -it -v C:\Users\Bob\Documents\test:/usr/python -w /usr/python bob/python
It works just fine and I receive the following prompt:
root@63eef6ac2b96:/usr/python#
To avoid repeating the command over and over, I built a Makefile with the following target:
docker:
	docker run -it -v C:\Users\Bob\Documents\test:/usr/python -w /usr/python bob/python
when I try to execute
make docker
I receive the following error
PS C:\Users\Bob\documents\test> make docker
docker run -it -v C:\Users\Bob\Documents\test:/usr/python -w /usr/python bob/python
c:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: the working directory 'C:/MinGW/msys/1.0/python' is invalid, it needs to be an absolute path.
See 'c:\Program Files\Docker\Docker\Resources\bin\docker.exe run --help'.
make.exe": *** [docker] Error 125
Any suggestion is greatly appreciated.
You do not have to use a Makefile; Docker Compose does what you are looking for. (The error itself suggests that MinGW's make is rewriting the POSIX-style path /usr/python into a Windows path before Docker sees it.)
In brief, you need to create a docker-compose.yml file and describe all the desired steps inside it. I am not aware of your full setup, but I will try to provide a skeleton for your docker-compose file.
version: '3.7'   # depends on your Docker engine version
services:
  python_service:   # add a name of your choice
    build: build/   # the path of the image's Dockerfile
    volumes:
      - C:\Users\Bob\Documents\test:/usr/python
    working_dir: /usr/python
In the snippet above:
the -v flag is replaced with the volumes section
the -w flag is replaced with the working_dir section
How to use:
Now that your docker-compose file is ready, you need to use it. Instead of remembering/repeating the docker run command, you simply execute docker-compose up in the directory where your compose file is located and you will have your container up and running.
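For example, assuming the docker-compose.yml above is in the current directory (and that your image provides bash):
# One-off interactive container, roughly equivalent to the original docker run -it:
docker-compose run --rm python_service
# Or start it in the background and attach a shell later:
docker-compose up -d
docker-compose exec python_service bash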
Note that this is a simple example of how to use docker-compose. It is a powerful tool that allows you to start containers from multiple images, create networks, and much more. I would recommend reading the official documentation for additional information.

Bitbucket Pipelines - is it possible to download an additional file to the project via curl?

We have separate builds for the frontend and backend of the application, and during the build we need to pull the dist build of the frontend into the backend project. The problem is that during the build, curl cannot write to the desired location.
In detail, we are using Spring Boot as the backend serving an Angular 2 frontend, so we need to pull the frontend files into the src/main/resources/static folder.
image: maven:3.3.9
pipelines:
  default:
    - step:
        script:
          - curl -s -L -v --user xxx:XXXX https://api.bitbucket.org/2.0/repositories/apprentit/rent-it/downloads/release_latest.tar.gz -o src/main/resources/static/release_latest.tar.gz
          - tar -xf -C src/main/resources/static --directory src/main/resources/static release_latest.tar.gz
          - mvn package -X
As a result, the build fails with the following curl output:
* Failed writing body (0 != 16360)
Note: I've tried the same with the maven-exec-plugin; the result was the same. The same commands work on my local machine, naturally.
I would try running these commands from a local docker run of the image you're specifying (maven:3.3.9). I found that to be the most helpful way to debug things that behaved differently in Pipelines vs. in my local environment.
To your specific question: yes, you can download external content during the Pipelines run. I have a Pipeline that clones other repos from Bitbucket via HTTP into the running container.
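A sketch of that local reproduction (the /work mount path is an arbitrary choice):
# Start a disposable container from the image Pipelines uses, with the
# project mounted as the working directory, then run the script steps by hand.
docker run -it --rm -v "$PWD":/work -w /work maven:3.3.9 bash
# Inside the container, make sure the curl target directory exists first
# (a missing directory is one common cause of "Failed writing body"):
#   mkdir -p src/main/resources/static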

How to run a docker command in Jenkins Build Execute Shell

I'm new to Jenkins and have been searching around, but I couldn't find what I was looking for.
I'd like to know how to run a docker command in Jenkins (Build - Execute Shell):
Example: docker run hello-world
I have set Docker Installation to "Install latest from docker.io" in Jenkins' Configure System and have also installed several Docker plugins. However, it still doesn't work.
Can anyone help me point out what else should I check or set?
One of the following plugins should work fine:
CloudBees Docker Custom Build Environment Plugin
CloudBees Docker Pipeline Plugin
I normally run my builds on slave nodes that have docker pre-installed.
I came across another, generic solution. Since I'm no expert at creating a Jenkins plugin out of this, here are the manual steps:
Create/change your Jenkins container (I use Portainer) with the environment variable DOCKER_HOST=tcp://192.168.1.50 (when using the unix socket protocol you also have to mount your Docker socket), and append :/var/jenkins_home/bin to the existing PATH variable.
On your Docker host, copy the docker binary into the Jenkins container: "docker cp /usr/bin/docker jenkins:/var/jenkins_home/bin/"
Restart the Jenkins container.
Now you can use the docker command from any script or command line, and the changes will survive an image update.
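Put together as plain commands, the same idea might look like this (the daemon address and port 2375 are assumptions based on the steps above):
# Start Jenkins with DOCKER_HOST pointing at the daemon's TCP endpoint.
docker run -d --name jenkins \
  -e DOCKER_HOST=tcp://192.168.1.50:2375 \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts
# Copy the docker client into the persisted bin directory, then restart.
docker exec jenkins mkdir -p /var/jenkins_home/bin
docker cp /usr/bin/docker jenkins:/var/jenkins_home/bin/
docker restart jenkins
Build steps can then either call /var/jenkins_home/bin/docker by its full path or rely on the PATH extension described above.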
