I have TeamCity set up in docker-compose.yml:
version: "3"
services:
  server:
    image: jetbrains/teamcity-server:2021.1.2
    ports:
      - "8112:8111"
    volumes:
      - ./data_dir:/data/teamcity_server/datadir
      - ./log_dir:/opt/teamcity/logs
  db:
    image: mysql
    ports:
      - "3306:3306"
    volumes:
      - ./mysql:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=111
      - MYSQL_DATABASE=teamcity
  teamcity-agent-1:
    image: jetbrains/teamcity-agent:2021.1.2-linux-sudo
    environment:
      - SERVER_URL=http://server:8111
      - AGENT_NAME=docker-agent-1
      - DOCKER_IN_DOCKER=start
    privileged: true
    container_name: docker_agent_1
    ipc: host
    shm_size: 1024M
  teamcity-agent-2:
    image: jetbrains/teamcity-agent:2021.1.2-linux-sudo
    environment:
      - SERVER_URL=http://server:8111
      - AGENT_NAME=docker-agent-2
      - DOCKER_IN_DOCKER=start
    privileged: true
    container_name: docker_agent_2
    ipc: host
    shm_size: 1024M
  teamcity-agent-3:
    image: jetbrains/teamcity-agent:2021.1.2-linux-sudo
    environment:
      - SERVER_URL=http://server:8111
      - AGENT_NAME=docker-agent-3
      - DOCKER_IN_DOCKER=start
    privileged: true
    container_name: docker_agent_3
    ipc: host
    shm_size: 1024M
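One thing worth noting about the compose file above: the agent services do not mount /data/teamcity_agent/conf, so any buildAgent.properties changes (mentioned further down) are lost whenever the containers are recreated. A minimal sketch of persisting that directory for one agent, assuming the config path from the official image (the host folder name here is made up):

  teamcity-agent-1:
    image: jetbrains/teamcity-agent:2021.1.2-linux-sudo
    volumes:
      # keeps buildAgent.properties across container recreation
      - ./agent1_conf:/data/teamcity_agent/conf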
I also have E2E tests which I run on the TeamCity agents. The tests generate an HTML report, and if tests fail they generate a video report as well. Everything works as expected locally without TeamCity. When I moved it to TeamCity I configured the "reports" folder to be kept in artifacts, and in fact I see the following behaviour:
HTML reports arrive updated every time
videos keep growing from build to build. I generate a different path with a timestamp for the folder name and for the video names to avoid caching. If 1 test failed and generated 1 video, this video appears in the artifacts of all following builds, even though they pass and the video folder should be empty
My question is described exactly in a JetBrains support thread from 2014:
https://teamcity-support.jetbrains.com/hc/en-us/community/posts/206845765-Build-Agent-Artifacts-Cache-Cleanup
but I tried the different settings from there and unfortunately had no luck.
What I tried myself and what did not help:
tried to clean the \system\.artifacts_cache folder. Artifacts are still growing
tried to find a config for the agent:
in /data/teamcity_agent/conf/buildAgent.properties I placed 2 new settings
teamcity.agent.filecache.publishing.disabled=true
teamcity.agent.filecache.size.limit.bytes=1
after restarting the agent I see those 2 new settings in the TeamCity web interface, which means the settings were applied,
but the behaviour is still the same. Maybe other settings should be used, but I did not manage to find them.
What helps is pressing "Clean sources on this agent" in the agent settings, but pressing it by hand is not the way.
It looks like a cache issue, because if I assign another agent the accumulation starts from the beginning.
Any suggestions are appreciated.
It seems I found an answer:
https://www.jetbrains.com/help/teamcity/2021.1/clean-checkout.html#Automatic+Clean+Checkout
"Clean all files before build" option should be selected on the Create/Edit Build Configuration > Version Control Settings page
I created a docker-compose.yml file in the root path as shown below
version: "3"
services:
  db:
    container_name: spring-db
    image: mysql
    platform: linux/x86_64 # added line
    environment:
      MYSQL_DATABASE: spring_db
      MYSQL_USER: spring_db
      MYSQL_PASSWORD: spring_pw
      MYSQL_ROOT_PASSWORD: root_pw
    volumes:
      - ./db/data:/var/lib/mysql:rw
    ports:
      - "3306:3306"
    restart: always
  app:
    container_name: spring-app
    image: openjdk:11-jdk
    ports:
      - "8080:8080"
    volumes:
      - ./app:/app
    working_dir: /app
    command: ["./gradlew", "bootrun"]
    depends_on:
      - db
    restart: always
After creating it, when I run docker-compose up -d in the project root, mysql comes up successfully, but the app service fails with an error:
Downloading https://services.gradle.org/distributions/gradle-7.4.1-bin.zip
...........10%...........20%...........30%...........40%...........50%...........60%...........70%...........80%...........90%...........100%
Welcome to Gradle 7.4.1!
Here are the highlights of this release:
- Aggregated test and JaCoCo reports
- Marking additional test source directories as tests in IntelliJ
- Support for Adoptium JDKs in Java toolchains
For more details see https://docs.gradle.org/7.4.1/release-notes.html
Starting a Gradle Daemon (subsequent builds will be faster)
> Task :compileKotlin
> Task :compileKotlin UP-TO-DATE
> Task :compileJava NO-SOURCE
> Task :processResources UP-TO-DATE
> Task :classes UP-TO-DATE
> Task :bootRunMainClassName UP-TO-DATE
> Task :bootRun FAILED
4 actionable tasks: 1 executed, 3 up-to-date
Please check the code and errors; bootRun fails and I don't know exactly what to fix.
I am practicing setting up a simple server with docker-compose. I just need to serve a simple Spring Boot API via docker-compose, but I cannot get past the situation above.
First of all, the question arose because of my own carelessness.
The error occurred because the openjdk version was incorrect. After changing the app image to openjdk 17 and rebuilding it, everything worked normally. You must match the JDK image to the Java version you selected when creating the Spring Boot project.
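For illustration, a minimal sketch of the corrected app service, assuming the project was generated for Java 17 (the exact image tag, openjdk:17-jdk, is an assumption):

  app:
    container_name: spring-app
    image: openjdk:17-jdk # match the JDK image to the project's Java version
    ports:
      - "8080:8080"
    volumes:
      - ./app:/app
    working_dir: /app
    command: ["./gradlew", "bootRun"]
    depends_on:
      - db
    restart: always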
I need to provide a POC as an argument for migrating workflows at my current job. Currently we do this:
People code in NetBeans
People click Build in NetBeans
Deploy locally
Apply code changes
NetBeans rebuilds and redeploys the code.
Things to know:
It seems Tomcat detects when a new WAR is put in the webapps directory and hot-deploys it;
What I aim to automate is not the hot-deploy (since this is already a Tomcat feature), but the build process;
We are using Maven to build the project.
I'm using docker-compose to get everything up in a single specification.
So far I have been able to containerize the Postgres database, the pgAdmin instance we use, and the initial build of the application using a multi-stage Dockerfile.
Tomcat app Dockerfile
FROM maven AS buildserver
ADD . /usr/src/mymaven/
WORKDIR /usr/src/mymaven
# build the project
RUN mvn -f pom.xml clean package -DskipTests
FROM tomcat:latest
COPY conf-tomcat/tomcat-users.xml /usr/local/tomcat/conf/
COPY conf-tomcat/server.xml /usr/local/tomcat/conf/
COPY conf-tomcat/context.xml /usr/local/tomcat/webapps/manager/META-INF/
# Copy the built war file into webapps folder of tomcat container
COPY --from=buildserver /usr/src/mymaven/target/*.war /usr/local/tomcat/webapps
What I am having trouble with is triggering the rebuild when there are code changes (imitating what NetBeans does). I can't find in either Maven's or NetBeans' documentation how that detection and triggering works.
I am using volumes to map the app source directory into the container in the hope that it would just work, but I was wrong.
My docker-compose.yml is as follows:
version: '3'
services:
  pgadmin:
    container_name: pgadmin
    image: dpage/pgadmin4
    env_file:
      - ../db-postgres/pgadmin/pgadmin.env
    depends_on:
      - pg-dev
    networks:
      - dev-network
    volumes:
      - pgadmin-data:/var/lib/pgadmin
    ports:
      - "88:80"
  pg-dev:
    container_name: pg-dev
    image: pg-dev:latest
    env_file:
      - ../db-postgres/db-dev/pg-dev.env
    volumes:
      - pg-data:/var/lib/postgresql/data
    networks:
      - dev-network
    ports:
      - "5433:5432"
  app:
    container_name: app
    build: .
    volumes:
      - app-src:/usr/src/mymaven
      - artifacts:/usr/src/mymaven/target
      - maven-repo:/root/.m2
    networks:
      - dev-network
    ports:
      - "8888:8080"
    depends_on:
      - pg-dev
volumes:
  maven-repo:
    driver: local
    driver_opts:
      type: bind
      device: $HOME/.m2
      o: bind
  app-src:
    driver: local
    driver_opts:
      type: bind
      device: .
      o: bind
  artifacts:
    driver: local
    driver_opts:
      type: bind
      device: target/
      o: bind
  pg-data:
  pgadmin-data:
networks:
  dev-network:
Any help in coming up with a solution for this is appreciated, as well as any general advice on how to improve this workflow/build.
UPDATE
I came up with somewhat of a workaround, but now I am having problems testing it.
I defined a Maven container to work as a build server:
FROM maven
ADD . /usr/src/mymaven/
WORKDIR /usr/src/mymaven
RUN apt update && apt install entr -y
# build the project
RUN mvn -f pom.xml clean package -DskipTests
and now I am defining the entrypoint in the docker-compose.yml:
...
  buildserver:
    container_name: buildserver
    build:
      context: .
      dockerfile: maven-builder.Dockerfile
    volumes:
      - app-src:/usr/src/mymaven
      - maven-repo:/root/.m2
      - artifacts:/usr/src/mymaven/target
    networks:
      - dev-network
    entrypoint: sh -c 'find src/ | entr mvn -f pom.xml clean package -DskipTests --batch-mode'
...
But now I am getting an error message when this container comes up:
find: ‘src/’: No such file or directory
entr: No regular files to watch
This is weird to me, as I successfully built the project in the first run, but the entrypoint seems to be failing.
Clarification: what I am being asked to do is come up with a workflow that removes the need to deploy from NetBeans (they want everything automatic). I looked around for a Jenkins workflow, but could not really find a way to achieve the desired result.
According to the NetBeans docs, you can bind Maven goals to IDE actions (http://wiki.netbeans.org/MavenBestPractices, section "Binding Maven goals to IDE actions"):
It's possible to customize the default Maven goal to IDE Action binding from the project's customizer. Right click on the project node and select "Properties" or use the File/Project Properties main menu item to invoke the Project properties dialog. On the left hand side, select the panel named "Actions". The panel lists all available default project actions that can be mapped. When selecting one from the list the textfields in the bottom allow to change the values.
It looks to me like you should bind the Build Project NetBeans action to a specific Maven goal. From this point, it is up to you to come up with a creative solution. You could explore the Maven Exec plugin's capabilities and run custom commands during the build process (check "I want to execute shell commands from Maven's pom.xml"). For instance, it should be possible to copy the .war file from the target folder to wherever you want on the filesystem, or even execute scripts inside the running container.
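A rough command-line sketch of that idea, purely as an illustration (the WAR name, the Tomcat path, and the use of the exec-maven-plugin's exec goal are assumptions, not something from the question):

# build the project, then copy the freshly built WAR into Tomcat's hot-deploy directory
mvn clean package -DskipTests
mvn exec:exec -Dexec.executable=cp -Dexec.args="target/myapp.war /usr/local/tomcat/webapps/"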
PS: It looks like you are trying to do something quite odd, but I'll assume that solving it this way makes sense to you.
Context:
I am currently trying to create a Jenkins job that builds periodically and updates the images that are in my docker-compose file. I managed to get a basic version of this to work by labeling my services in my docker-compose.yml. For example:
  gitlab:
    image: 'gitlab/gitlab-ce:latest'
    container_name: 'gitlab'
    labels:
      update: 'notify'
    ...
  letsencrypt:
    image: 'jrcs/letsencrypt-nginx-proxy-companion'
    container_name: 'letsencrypt-companion'
    labels:
      update: 'auto'
    ...
'notify' means that the job should pull new Docker images periodically and notify me that an image is ready to be updated; 'auto' means that it is allowed to automatically deploy the new image.
Problem:
I want to make it so that when new images are pulled, Jenkins automatically notifies me that new images are ready / deployed. The problem, however, is that I have to interpret the output of docker-compose pull and docker-compose up -d to know which images were actually new and deployed. I need a solution that works in a Jenkins pipeline (declarative or scripted).
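For what it's worth, one way to see which images actually changed, without parsing docker-compose output, is to compare image IDs before and after the pull. A rough shell sketch (it assumes docker-compose v1, that every service has an image: line, and the file names are made up), not a full Jenkins pipeline:

#!/bin/sh
# list the images referenced by the compose file (strip any YAML quoting)
docker-compose config | awk '$1 == "image:" {print $2}' | tr -d "'\"" | sort -u > images.txt

# record "<image> <id>" pairs for every referenced image
snapshot() {
  while read -r img; do
    printf '%s %s\n' "$img" "$(docker image inspect -f '{{.Id}}' "$img" 2>/dev/null)"
  done < images.txt
}

snapshot > before.txt
docker-compose pull
snapshot > after.txt

# lines only present in after.txt correspond to images that were updated
diff before.txt after.txt | grep '^>' || echo "no new images"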
Try watching this video: https://www.youtube.com/watch?v=ZL3hMP9BdmQ; I think that's what you are looking for.
https://github.com/v2tec/watchtower
"If you mount the config file as described below, be sure to also prepend the url for the registry when starting up your watched image (you can omit the https://). Here is a complete docker-compose.yml file that starts up a docker container from a private repo at dockerhub and monitors it with watchtower. Note the command argument changing the interval to 30s rather than the default 5 minutes."
version: "3"
services:
  cavo:
    image: index.docker.io/<org>/<image>:<tag>
    ports:
      - "443:3443"
      - "80:3080"
  watchtower:
    image: v2tec/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /root/.docker/config.json:/config.json
    command: --interval 30
Installed Drone 0.8 on a virtual machine with the following Docker Compose file:
version: '2'
services:
  drone-server:
    image: drone/drone:0.8
    ports:
      - 8080:8000
      - 9000:9000
    volumes:
      - /var/lib/drone:/var/lib/drone/
    restart: always
    environment:
      - DATABASE_DRIVER=sqlite3
      - DATABASE_CONFIG=/var/lib/drone/drone.sqlite
      - DRONE_OPEN=true
      - DRONE_ORGS=my-github-org
      - DRONE_ADMIN=my-github-user
      - DRONE_HOST=${DRONE_HOST}
      - DRONE_GITHUB=true
      - DRONE_GITHUB_CLIENT=${DRONE_GITHUB_CLIENT}
      - DRONE_GITHUB_SECRET=${DRONE_GITHUB_SECRET}
      - DRONE_SECRET=${DRONE_SECRET}
      - GIN_MODE=release
  drone-agent:
    image: drone/agent:0.8
    restart: always
    depends_on: [ drone-server ]
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - DRONE_SERVER=drone-server:9000
      - DRONE_SECRET=${DRONE_SECRET}
All variable values are stored in a .env file and are correctly passed to the running containers. I am trying to run a build using a private GitHub repository. When pushing to the repository for the first time, the build starts and fails with the following error (i.e. the build fails):
Then, after clicking the Restart button, I see another screen (i.e. the build is pending):
I have the following containers running on the same machine:
root@ci:~# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
94e6a266e09d drone/agent:0.8 "/bin/drone-agent" 2 hours ago Up 2 hours root_drone-agent_1
7c7d9f93a532 drone/drone:0.8 "/bin/drone-server" 2 hours ago Up 2 hours 80/tcp, 443/tcp, 0.0.0.0:9000->9000/tcp, 0.0.0.0:8080->8000/tcp root_drone-server_1
Even with DRONE_DEBUG=true, the only entry in the agent log is:
2017/09/10 15:11:54 pipeline: request next execution
So I think that for some reason my agent does not get the build from the queue. I noticed that the latest Drone versions use gRPC instead of WebSockets.
So how do I get the build started? What am I missing here?
The reason for the issue was a wrong .drone.yml file. Only the first red screen should be shown in that case; showing 'pending' and a Restart button for incorrect YAML is a Drone issue.
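For reference, a minimal sketch of a syntactically valid .drone.yml in the Drone 0.8 pipeline format; the image and commands below are placeholders, not taken from the question:

pipeline:
  build:
    image: golang:1.9
    commands:
      - go build
      - go test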
I'm setting up a SonarQube server and wanted to use the official dockerised version, but when I mount the data folders as volumes to preserve state between restarts, the Quality Profiles page fails to display the profiles.
The profiles are stored though, as I can see them in the 'compare profiles' dropdown, and they are preserved on restart.
I've tried on Safari and Chrome, and the behaviour is consistent.
There are no errors in the logs, and there are, as far as I can see, no permission issues, since the data is being written. Without mounting the data volumes it works fine, but then the state is not persisted on restart, which is a deal-breaker.
Anyone else with this problem that has managed to solve it?
This is my docker-compose config:
sonarqube:
  image: sonarqube:5.1
  links:
    - db:postgres
  ports:
    - "9000:9000"
    - "9092:9092"
    - "5432:5432"
  environment:
    - SONARQUBE_JDBC_URL=jdbc:postgresql://postgres:5432/sonar
    - SONARQUBE_JDBC_PASSWORD=sonar
    - SONARQUBE_JDBC_USERNAME=sonar
  volumes:
    - /data/sonar:/opt/sonarqube/data
  privileged: true
db:
  image: postgres
  environment:
    - POSTGRES_PASSWORD=sonar
    - POSTGRES_USER=sonar
  volumes:
    - /data/postgres:/var/lib/postgresql/data
  privileged: true
And the Docker versions:
[ec2-user@xxxxxxxxx sonar]$ docker version
Client version: 1.6.2
Client API version: 1.18
Go version (client): go1.3.3
Git commit (client): 7c8fca2/1.6.2
OS/Arch (client): linux/amd64
Server version: 1.6.2
Server API version: 1.18
Go version (server): go1.3.3
Git commit (server): 7c8fca2/1.6.2
OS/Arch (server): linux/amd64
The problem is that the sonarqube image doesn't use a volume for the $SONARQUBE_HOME/extensions folder. The first time you run docker-compose up, Sonar initialises the database and installs 3 plugins: Java, Git and SVN.
The second time you run docker-compose up, the database is already initialised, but the $SONARQUBE_HOME/extensions/plugins folder is empty. So the Java, Git and SVN plugins are no longer installed and SonarQube disables their rules.
This docker-compose.yml file works as expected.
sonarqube:
  image: sonarqube:5.1.1
  links:
    - db
  ports:
    - "9000:9000"
  environment:
    - SONARQUBE_JDBC_URL=jdbc:postgresql://db:5432/sonar
  volumes:
    - /data/sonar/data:/opt/sonarqube/data
    - /data/sonar/extensions:/opt/sonarqube/extensions
db:
  image: postgres
  environment:
    - POSTGRES_USER=sonar
    - POSTGRES_PASSWORD=sonar
  ports:
    - "5432:5432"
  volumes:
    - /data/postgres:/var/lib/postgresql/data
It still works after running docker-compose rm, and you will be able to add other plugins.
A VOLUME instruction should be used in the Dockerfile that builds the image. I will submit it to them.
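A minimal sketch of what that suggestion would look like in the image's Dockerfile, assuming SONARQUBE_HOME is /opt/sonarqube as implied by the mounts above:

# declare the data and extensions folders as volumes so their contents survive container recreation
VOLUME ["/opt/sonarqube/data", "/opt/sonarqube/extensions"]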