I installed Drone 0.8 on a virtual machine with the following Docker Compose file:
version: '2'
services:
  drone-server:
    image: drone/drone:0.8
    ports:
      - 8080:8000
      - 9000:9000
    volumes:
      - /var/lib/drone:/var/lib/drone/
    restart: always
    environment:
      - DATABASE_DRIVER=sqlite3
      - DATABASE_CONFIG=/var/lib/drone/drone.sqlite
      - DRONE_OPEN=true
      - DRONE_ORGS=my-github-org
      - DRONE_ADMIN=my-github-user
      - DRONE_HOST=${DRONE_HOST}
      - DRONE_GITHUB=true
      - DRONE_GITHUB_CLIENT=${DRONE_GITHUB_CLIENT}
      - DRONE_GITHUB_SECRET=${DRONE_GITHUB_SECRET}
      - DRONE_SECRET=${DRONE_SECRET}
      - GIN_MODE=release
  drone-agent:
    image: drone/agent:0.8
    restart: always
    depends_on: [ drone-server ]
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - DRONE_SERVER=drone-server:9000
      - DRONE_SECRET=${DRONE_SECRET}
All variable values are stored in an .env file and are correctly passed to the running containers. I am trying to run a build using a private GitHub repository. When I push to the repository for the first time, the build starts and fails with the following error (i.e. the build fails):
Then, after clicking the Restart button, I see another screen (i.e. the build is pending):
The following containers are running on the same machine:
root@ci:~# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
94e6a266e09d drone/agent:0.8 "/bin/drone-agent" 2 hours ago Up 2 hours root_drone-agent_1
7c7d9f93a532 drone/drone:0.8 "/bin/drone-server" 2 hours ago Up 2 hours 80/tcp, 443/tcp, 0.0.0.0:9000->9000/tcp, 0.0.0.0:8080->8000/tcp root_drone-server_1
Even with DRONE_DEBUG=true, the only log entry in the agent log is:
2017/09/10 15:11:54 pipeline: request next execution
So it seems that, for some reason, my agent does not pick up the build from the queue. I noticed that the latest Drone versions use gRPC instead of WebSockets.
How do I get the build started? What am I missing here?
The reason for the issue was an invalid .drone.yml file. Only the first red (failed) screen should be shown in that case; showing a pending build and a Restart button for incorrect YAML is a Drone issue.
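For reference, a minimal .drone.yml in the 0.8 pipeline format looks roughly like the sketch below; the image and commands are placeholders, not taken from the failing repository:

pipeline:
  build:
    image: golang:1.9
    commands:
      - go build
      - go test ./...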
I have TeamCity set up in docker-compose.yml:
version: "3"
services:
server:
image: jetbrains/teamcity-server:2021.1.2
ports:
- "8112:8111"
volumes:
- ./data_dir:/data/teamcity_server/datadir
- ./log_dir:/opt/teamcity/logs
db:
image: mysql
ports:
- "3306:3306"
volumes:
- ./mysql:/var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD=111
- MYSQL_DATABASE=teamcity
teamcity-agent-1:
image: jetbrains/teamcity-agent:2021.1.2-linux-sudo
environment:
- SERVER_URL=http://server:8111
- AGENT_NAME=docker-agent-1
- DOCKER_IN_DOCKER=start
privileged: true
container_name: docker_agent_1
ipc: host
shm_size: 1024M
teamcity-agent-2:
image: jetbrains/teamcity-agent:2021.1.2-linux-sudo
environment:
- SERVER_URL=http://server:8111
- AGENT_NAME=docker-agent-2
- DOCKER_IN_DOCKER=start
privileged: true
container_name: docker_agent_2
ipc: host
shm_size: 1024M
teamcity-agent-3:
image: jetbrains/teamcity-agent:2021.1.2-linux-sudo
environment:
- SERVER_URL=http://server:8111
- AGENT_NAME=docker-agent-3
- DOCKER_IN_DOCKER=start
privileged: true
container_name: docker_agent_3
ipc: host
shm_size: 1024M
and I have E2E tests which I run on the TeamCity agents. The tests generate an HTML report and, when tests fail, a video report as well. Everything works as expected locally without TeamCity. When I move it to TeamCity, I configure the build to keep the "reports" folder in artifacts. The actual behaviour is the following:
HTML reports arrive updated every time
videos keep accumulating from build to build. I generate a different path with a timestamp for the folder name and for the video names to avoid caching. If one test failed and generated one video, this video will appear in the artifacts of all subsequent builds, even though they pass and the video folder should be empty
My question is described exactly in a JetBrains support post from 2014:
https://teamcity-support.jetbrains.com/hc/en-us/community/posts/206845765-Build-Agent-Artifacts-Cache-Cleanup
but I tried different settings from there and, unfortunately, had no luck.
What I tried myself and what did not help:
tried to clean the \system\.artifacts_cache folder; artifacts are still growing
tried to find a config for the agent:
in /data/teamcity_agent/conf/buildAgent.properties I placed 2 new settings:
teamcity.agent.filecache.publishing.disabled=true
teamcity.agent.filecache.size.limit.bytes=1
after restarting the agent I see those 2 new settings in the TeamCity web interface, which means the settings were applied,
but the behaviour is still the same. Maybe other settings should be used, but I did not manage to find them.
What does help is pressing "Clean sources on this agent" in the agent settings, but pressing it by hand is not the way to go.
It looks like a cache issue, because if I assign another agent the accumulation starts from the beginning.
Any suggestions are appreciated.
It seems I found an answer:
https://www.jetbrains.com/help/teamcity/2021.1/clean-checkout.html#Automatic+Clean+Checkout
"Clean all files before build" option should be selected on the Create/Edit Build Configuration > Version Control Settings page
I am trying to set up tests for my Laravel application.
The application runs with Docker compose.
When I try to start my tests with this command:
docker-compose -p tests --env-file .env_tests run --rm myapp ./vendor/bin/phpunit
the tests start to run before the database container is ready.
How can I make my tests wait for the database to become ready?
My docker-compose.yml looks like this:
version: '2'
services:
  mariadb:
    image: 'bitnami/mariadb:10.1'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
      - MARIADB_USER=my_user
      - MARIADB_DATABASE=my_database
      - MARIADB_PASSWORD=my_password
    ports:
      # connect your dbeaver/workbench to localhost:${WORKBENCH_PORT}
      - ${WORKBENCH_PORT}:3306
    # volumes:
      # Do not load databases here, as there is no
      # good way for other containers to wait for this to finish
      # - ./database:/docker-entrypoint-initdb.d
  myapp:
    tty: true
    image: bitnami/laravel:6-debian-9
    environment:
      - DB_HOST=mariadb
      - DB_USERNAME=my_user
      - DB_DATABASE=my_database
      - DB_PASSWORD=my_password
    depends_on:
      - mariadb
    ports:
      - 3000:3000
    volumes:
      - ./:/app
When I start the application normally (docker-compose up), Laravel waits for the mariadb container to finish loading, but I couldn't find out how this is done.
---- Edit ----
I found that the bitnami/laravel Docker image that I use has a script with a wait_for_db() function that seems to wait for the database.
What I haven't found out yet is why this script runs in normal mode, but not when I start the tests.
According to the official docs, it is not possible to wait until the database is ready, but only until it has started:
However, for startup Compose does not wait until a container is “ready” (whatever that means for your particular application) - only until it’s running. There’s a good reason for this.
(...)
The best solution is to perform this check in your application code, both at startup and whenever a connection is lost for any reason.
The difference in your app's behaviour between the general case and the test case may be related to other reasons, such as the tests taking less time to load (giving the database less time to get ready) or the tests handling connection failures in a different way (not retrying after some time).
EDIT
Using docker-compose run overrides the entrypoint of the container. Therefore, even if originally there was a script intended to wait for the database initialization, it will not be run.
Check the docs of the command:
First, the command passed by run overrides the command defined in the service configuration. For example, if the web service configuration is started with bash, then docker-compose run web python app.py overrides it with python app.py.
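If you want the test run itself to wait, one workaround is to wrap the PHPUnit call in a small retry loop. This is only a sketch: it assumes the pdo_mysql extension is available inside the myapp container (Laravel needs it anyway) and reuses the credentials from the compose file above:

docker-compose -p tests --env-file .env_tests run --rm myapp sh -c '
  # retry until a PDO connection to the mariadb service succeeds
  until php -r "new PDO(\"mysql:host=mariadb;dbname=my_database\", \"my_user\", \"my_password\");" 2>/dev/null; do
    echo "waiting for mariadb..."; sleep 2
  done
  ./vendor/bin/phpunit'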
I'm using docker-compose to manage a multi-container application. One of those containers needs access to the contents of a directory on the host.
This seems simple according to the various sources of documentation on Docker and docker-compose, but I'm struggling to get it working.
event_processor:
  environment:
    - COMPOSE_CONVERT_WINDOWS_PATHS=1
  build: ./Docker/event_processor
  ports:
    - "15672:15672"
  entrypoint: python -u /src/event_processor/event_processor.py
  networks:
    - app_network
  volumes:
    - C/path/to/interesting/directory:/interesting_directory
Running this I get the error message:
ERROR: Named volume
"C/path/to/interesting/directory:/interesting_directory:rw" is used in
service "event_processor" but no declaration was found in the
volumes section.
I understand from the docs that a top-level declaration is only necessary if data is to be shared between containers, which isn't the case here.
The docs for docker-compose I linked above have an example which seems to do exactly what I need:
version: "3.2"
services:
web:
image: nginx:alpine
ports:
- "80:80"
volumes:
- type: volume
source: mydata
target: /data
volume:
nocopy: true
- type: bind
source: ./static
target: /opt/app/static
networks:
webnet:
volumes:
mydata:
However, when I try it, I get errors about the syntax:
ERROR: The Compose file '.\docker-compose.yaml' is invalid because:
services.audio_event_processor.volumes contains an invalid type, it
should be a string
So I tried to play along:
volumes:
  - type: "bind"
    source: "C/path/to/interesting/directory"
    target: "/interesting_directory"
ERROR: The Compose file '.\docker-compose.yaml' is invalid because:
services.audio_event_processor.volumes contains an invalid type, it should be a string
So again the same error.
I tried the following too:
volumes:
  - type=bind, source=C/path/to/interesting/directory,destination=/interesting_directory
No error, but attaching to the running container, I see the following two folders:
type=bind, source=C
So it seems that I am able to create a number of volumes from one string (though the forward slashes are cutting the string up in this case), but I am not mapping it to the host directory.
I've read the docs but I think I'm missing something.
Can someone post an example of mounting a Windows directory from a host into a Linux container, so that the existing contents of the Windows directory are available from the container?
OK so there were multiple issues here:
1.
I had
version: '3'
at the top of my docker-compose.yml. The long syntax described here wasn't implemented until 3.4 so I stopped receiving the bizarre syntax error when I updated this to:
version: '3.6'
2.
I use my Docker account on 2 Windows PCs. Following a hint from another Stack Overflow post, I reset Docker to the factory settings. I had to give Docker the computer username and password, with the notice that this was necessary to access the contents of the local filesystem; at this point I remembered doing this on another PC, so I'm not sure whether the credentials were correct on this one. With the correct credentials for the current PC, I was able to bind-mount the volume with the expected results as follows:
version: '3.6'
services:
  event_processor:
    environment:
      - COMPOSE_CONVERT_WINDOWS_PATHS=1
    build: ./Docker/event_processor
    ports:
      - "15672:15672"
    entrypoint: python -u /src/event_processor/event_processor.py
    networks:
      - app_network
    volumes:
      - type: bind
        source: c:/path/to/interesting/directory
        target: /interesting_directory
Now it works as expected. I'm not sure if it was the factory reset or the updated credentials that fixed it. I'll find out tomorrow when I use another PC and update.
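For what it's worth, once the credentials are sorted out, the short volume syntax should also work, provided COMPOSE_CONVERT_WINDOWS_PATHS is visible to the docker-compose CLI itself (e.g. set in the shell or in an .env file) rather than only in the service's environment section. A hedged sketch using the same illustrative path:

    volumes:
      - "c:/path/to/interesting/directory:/interesting_directory"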
Context:
I am currently trying to create a Jenkins job that builds periodically and updates the images that are in my docker-compose file. I managed to get a basic version of this to work by labeling my services in my docker-compose.yml. For example:
gitlab:
  image: 'gitlab/gitlab-ce:latest'
  container_name: 'gitlab'
  labels:
    update: 'notify'
  ...
letsencrypt:
  image: 'jrcs/letsencrypt-nginx-proxy-companion'
  container_name: 'letsencrypt-companion'
  labels:
    update: 'auto'
  ...
Notify means that it should pull new Docker images periodically and notify me that an image is ready to be updated. Auto means that it is allowed to automatically deploy the new image.
Problem:
I want to make it so that when new images are pulled, Jenkins automatically notifies me that new images are ready / deployed. The problem, however, is that I have to interpret the output of docker-compose pull and docker-compose up -d to know which images were actually new and deployed. I need a solution that works in a Jenkins pipeline (declarative or scripted).
Try watching this video: https://www.youtube.com/watch?v=ZL3hMP9BdmQ; I think that's what you are looking for.
https://github.com/v2tec/watchtower
"If you mount the config file as described below, be sure to also prepend the url for the registry when starting up your watched image (you can omit the https://). Here is a complete docker-compose.yml file that starts up a docker container from a private repo at dockerhub and monitors it with watchtower. Note the command argument changing the interval to 30s rather than the default 5 minutes."
version: "3"
services:
cavo:
image: index.docker.io/<org>/<image>:<tag>
ports:
- "443:3443"
- "80:3080"
watchtower:
image: v2tec/watchtower
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /root/.docker/config.json:/config.json
command: --interval 30
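If you would rather do the detection inside the Jenkins job itself instead of delegating it to Watchtower, a rough shell sketch along these lines may work; it treats a change in the locally tagged image IDs as "a new image was pulled", and the notification step is left as a placeholder:

#!/bin/sh
# run from the directory containing docker-compose.yml
# collect the image references declared in the compose file
images=$(docker-compose config | awk '$1 == "image:" {print $2}')

# image IDs currently tagged locally (may be empty on the first run)
before=$(docker image inspect --format '{{.Id}}' $images 2>/dev/null | sort)
docker-compose pull
after=$(docker image inspect --format '{{.Id}}' $images | sort)

if [ "$before" != "$after" ]; then
  echo "New images were pulled, redeploying"
  docker-compose up -d
  # hook your notification here (mail, Slack, Jenkins build description, ...)
else
  echo "All images are up to date"
fi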
I am using the docker-compose 'recipe' below to bring up a container that runs a component of the Storm stream processing framework. I am finding that on Macs, when I enter the container (once it is up and running, via docker exec -t -i <container-id> bash) and run ping storm-supervisor, I get the error 'unknown host'. However, when I run the same docker-compose script on Linux, the host is recognized and ping succeeds.
The failure to resolve the host leads to problems with the Storm component, but what that component is doing can be ignored for this question. I'm pretty sure that if I figured out how to get the Mac docker-compose behavior to match Linux's, I would have no problem.
I think I am experiencing the issue mentioned in this post:
https://forums.docker.com/t/docker-compose-not-setting-hostname-when-network-mode-host/16728
version: '2'
services:
  supervisor:
    image: sunside/storm-supervisor
    container_name: storm-supervisor
    hostname: storm-supervisor
    network_mode: host
    ports:
      - "8000:8000"
    environment:
      - "LOCAL_HOSTNAME=localhost"
      - "NIMBUS_ADDRESS=localhost"
      - "NIMBUS_THRIFT_PORT=49627"
      - "DRPC_PORT=49772"
      - "DRPCI_PORT=49773"
      - "ZOOKEEPER_ADDRESS=localhost"
      - "ZOOKEEPER_PORT=2181"
Thanks in advance for any leads or tips!
"network_mode: host" will not work well on docker mac. I experienced the same issue where I had few of my containers in bridge network and the others in host network.
However, you can move all your containers to a custom bridge network. It solved for me.
You can edit your docker-compose.yml file to have a custom bridge network.
version: '2'
services:
  supervisor:
    image: sunside/storm-supervisor
    container_name: storm-supervisor
    hostname: storm-supervisor
    ports:
      - "8000:8000"
    environment:
      - "LOCAL_HOSTNAME=localhost"
      - "NIMBUS_ADDRESS=localhost"
      - "NIMBUS_THRIFT_PORT=49627"
      - "DRPC_PORT=49772"
      - "DRPCI_PORT=49773"
      - "ZOOKEEPER_ADDRESS=localhost"
      - "ZOOKEEPER_PORT=2181"
    networks:
      - storm
networks:
  storm:
    external: true
Also, execute the command below to create the custom network:
docker network create storm
You can verify it with:
docker network ls
Hope it helps.
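Alternatively, if the network doesn't need to exist outside this Compose project, you can let Compose create and manage it instead of marking it as external:

networks:
  storm:
    driver: bridge

With this variant the docker network create step is not needed; docker-compose up creates the network (named <project>_storm) automatically.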