I have tried to start the Apache Airflow UI on my local machine (Windows 11) but have failed so far. Here is a list of the steps I have taken.
The contents of the 'docker-compose-LocalExecutor.yml' file are as follows:
version: '3.7'
services:
  postgres:
    image: postgres:9.6
    environment:
      - POSTGRES_USER=airflow
      - POSTGRES_PASSWORD=airflow
      - POSTGRES_DB=airflow
    logging:
      options:
        max-size: 10m
        max-file: "3"
  webserver:
    image: puckel/docker-airflow:1.10.9
    restart: always
    depends_on:
      - postgres
    environment:
      - LOAD_EX=n
      - EXECUTOR=Local
    logging:
      options:
        max-size: 10m
        max-file: "3"
    volumes:
      - ./dags:/usr/local/airflow/dags
      # - ./plugins:/usr/local/airflow/plugins
    ports:
      - "8080:8080"
    command: webserver
    healthcheck:
      test: ["CMD-SHELL", "[ -f /usr/local/airflow/airflow-webserver.pid ]"]
      interval: 30s
      timeout: 30s
      retries: 3
  redis:
    image: redis
Start the Airflow containers with Docker Compose:
> cd C:\docker-airflow-master
> docker-compose -f docker-compose-LocalExecutor.yml up -d
Container docker-airflow-master-redis-1 Created
Container docker-airflow-master-postgres-1 Created
Container docker-airflow-master-webserver-1 Running
Container docker-airflow-master-redis-1 Starting
Container docker-airflow-master-postgres-1 Starting
Container docker-airflow-master-redis-1 Started
Container docker-airflow-master-postgres-1 Started
Check which containers are up and running:
> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
74ac4cd4fafb puckel/docker-airflow:1.10.9 "/entrypoint.sh webs…" 2 days ago Up 40 minutes (healthy) 5555/tcp, 8793/tcp, 0.0.0.0:8080->8080/tcp docker-airflow-master-webserver-1
1acef40c382a postgres:9.6 "docker-entrypoint.s…" 2 days ago Up 3 minutes 5432/tcp docker-airflow-master-postgres-1
e7adfadd1c38 redis "docker-entrypoint.s…" 2 days ago Up 3 minutes 6379/tcp docker-airflow-master-redis-1
Check network connections:
> netstat
Proto Local Address Foreign Address State
TCP [::1]:8080 LAPTOP-0FSNTPS1:63790 TIME_WAIT
TCP [::1]:8080 LAPTOP-0FSNTPS1:63791 TIME_WAIT
TCP [::1]:8080 LAPTOP-0FSNTPS1:63792 TIME_WAIT
Open the browser at localhost:8080: the page fails to load.
What could be wrong?
Have you tried with the latest official docker compose file? It worked fine for me.
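For reference, fetching and running the official quick-start compose file looks roughly like this (a sketch following the Airflow quick-start docs; per those docs, on some systems you may also need a .env file containing AIRFLOW_UID=50000):

# download the official quick-start compose file
curl -LfO 'https://airflow.apache.org/docs/apache-airflow/stable/docker-compose.yaml'
# create the mounted folders the file expects
mkdir dags logs plugins
# initialize the metadata database, then start everything
docker compose up airflow-init
docker compose up -d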
I have two Windows-based images that I'm using with Docker Compose.
The docker-compose.yaml:
services:
  application:
    image: myapp-win:latest
    container_name: "my-app"
    # for diagnosis
    entrypoint: ["cmd"]
    stdin_open: true
    tty: true
    # /diagnosis
    env_file: .myapp/.env
    environment:
      - POSTGRES_URI=jdbc:postgresql://db0:5432/mydatabase
    depends_on:
      db0:
        condition: service_healthy
  db0:
    image: stellirin/postgres-windows:10.10
    container_name: "my-db"
    ports:
      - 10000:5432 # this doesn't seem to work in windows
    env_file:
      - .postgres/.env
    volumes:
      - .postgres\initdb\:c:\docker-entrypoint-initdb.d\
    healthcheck:
      test: [ "CMD", "pg_isready", "-q", "-d", "${POSTGRES_DATABASE}", "-U", "${POSTGRES_USER}" ]
      timeout: 45s
      interval: 10s
      retries: 10
    restart: unless-stopped
With the two containers started, I accessed the terminal for the my-db container and got its IP address.
Next, I accessed the terminal for the my-app container. I was able to ping the my-db container by its IP address. However, it did not respond by its hostname:
c:\app> ping db0
Ping request could not find host db0.
This is symptomatic of why the application can't reach the database using the POSTGRES_URI variable.
Is there a different syntax for the hostname in a Windows container?
** edit **
I'm not able to ping outside the network, from either container:
c:\app> ping 8.8.8.8
Request timed out.
Not sure if this is relevant.
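One host-side check that can help here (a diagnostic sketch, not part of the original question; it assumes Compose's default <project>_default network name and the container names above):

# list the containers attached to the compose network and their aliases
docker network inspect <project>_default
# print the address Compose actually assigned to the database container
docker inspect my-db --format "{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}"

If db0 is missing from the network's container list, the two services ended up on different networks and the name will never resolve.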
Regardless of container OS, to my knowledge, referring to the other service's name (db0) directly won't work inside the container; the name is simply exposed to the other compose entries.
Instead, set an env var dependent on the name and read it in the container:
environment:
  - "ADDRESS_DB=db0"
Then, if you want to be able to ping db0 or similar, have a script register the env var's value as an available host name on start.
Alternatively, you may have success setting the extra_hosts field, but I haven't tested this, and you may need to give it a different name to prevent interpolation:
extra_hosts:
  - db_url:db0
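A minimal sketch of the "read it in the container" idea (shown as a POSIX shell loop for brevity, even though the containers here are Windows-based; it assumes an image where pg_isready is available, and ADDRESS_DB is the variable from the snippet above):

# block until the database answers on the name passed in via the environment
until pg_isready -h "$ADDRESS_DB" -p 5432; do
  echo "waiting for $ADDRESS_DB"
  sleep 1
done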
I have a problem with my Docker.
I use Docker Desktop version 20.10.2, build 2291f61, and docker-compose version 1.27.4, build 40524192, on Windows 10 Pro.
For the past few hours, whenever I launch any docker-compose setup and go to localhost:8000 in my browser, I'm automatically redirected to localhost:8080.
I've deleted all my Docker data (images, containers, networks...), and I've also reset Docker to factory defaults, but that didn't solve anything... I don't know what is going on!
Here is an example of one of my docker-compose.yml files:
version: "3.7"
services:
wordpress:
image: wordpress:php7.4-fpm
container_name: e2i-scollado-cours-wordpress
restart: unless-stopped
ports:
- 8000:80
environment:
WORDPRESS_DB_HOST: db
WORDPRESS_DB_USER: wordpress
WORDPRESS_DB_PASSWORD: password
WORDPRESS_DB_NAME: wordpress
volumes:
- ./storage/wordpress:/var/www/html
networks:
- e2i-scollado-cours
db:
image: mariadb:10.5
container_name: e2i-scollado-cours-database
restart: unless-stopped
ports:
- 3306:3306
environment:
MYSQL_ROOT_PASSWORD: secret
MYSQL_DATABASE: wordpress
MYSQL_USER: wordpress
MYSQL_PASSWORD: password
volumes:
- ./storage/database:/var/lib/mysql
networks:
- e2i-scollado-cours
phpmyadmin:
image: phpmyadmin
container_name: e2i-scollado-cours-phpmyadmin
restart: unless-stopped
ports:
- 8080:80
environment:
- PMA_ARBITRARY=1
- PMA_HOSTS=database
- PMA_USER=wordpress
- PMA_PASSWORD=password
networks:
- e2i-scollado-cours
networks:
e2i-scollado-cours:
driver:
bridge
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2e6c54623675 wordpress:php7.4-fpm "docker-entrypoint.s…" 17 minutes ago Up 17 minutes 9000/tcp, 0.0.0.0:8000->80/tcp e2i-scollado-cours-wordpress
bbfb12fa4c14 mariadb:10.5 "docker-entrypoint.s…" 17 minutes ago Up 17 minutes 0.0.0.0:3306->3306/tcp e2i-scollado-cours-database
243fc759179c phpmyadmin "/docker-entrypoint.…" 17 minutes ago Up 17 minutes 0.0.0.0:8080->80/tcp e2i-scollado-cours-phpmyadmin
If I try to open WordPress in my browser, I'm automatically redirected to phpMyAdmin in this case.
But even if I don't have a service on port 8080, I'm still redirected to port 8080.
Also, I have this problem with all of my docker-compose files; this one is just an example...
Please, if anyone has an answer, help me ^^
Thanks!
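One way to tell whether the redirect comes from a container or from the browser itself (a diagnostic sketch, not from the original post; curl does not use the browser's redirect cache):

# show only the response headers from port 8000
curl -I http://localhost:8000/
# a Location: header pointing at :8080 would mean a server-side redirect;
# no Location: header suggests the browser cached an old 301 redirect

Browsers cache 301 redirects aggressively, so a stale redirect can survive even a full Docker reset.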
I'm attempting to put a ruby Cucumber test into Docker. I'm using a docker-compose.yml file to start a selenium hub container along with a chrome and firefox node. Then I'm building an alpine ruby based image with my tests.
I've gotten the process to work; however, it involves finding the IP of the hub container each time it is built and then hardcoding that IP into my env.rb file, where I connect to the Selenium grid.
I've seen that linked containers can be connected by name, but I haven't had much luck there. Is there any way I can easily pass the hub container's IP to my test container?
Here is my yml file:
version: "3"
services:
hub:
image: selenium/hub
ports:
- "4444:4444"
environment:
GRID_MAX_SESSION: 16
GRID_BROWSER_TIMEOUT: 3000
GRID_TIMEOUT: 3000
chrome:
image: selenium/node-chrome
container_name: web-automation_chrome
depends_on:
- hub
environment:
HUB_PORT_4444_TCP_ADDR: hub
HUB_PORT_4444_TCP_PORT: 4444
NODE_MAX_SESSION: 4
NODE_MAX_INSTANCES: 4
volumes:
- /dev/shm:/dev/shm
ports:
- "9001:5900"
links:
- hub
firefox:
image: selenium/node-firefox
container_name: web-automation_firefox
depends_on:
- hub
environment:
HUB_PORT_4444_TCP_ADDR: hub
HUB_PORT_4444_TCP_PORT: 4444
NODE_MAX_SESSION: 2
NODE_MAX_INSTANCES: 2
volumes:
- /dev/shm:/dev/shm
ports:
- "9002:5900"
links:
- hub
myapp:
build: .
image: justinpshields/myapp
depends_on:
- hub
environment:
URL: hub
links:
- hub
networks:
default:
links is useless here: every container in a docker-compose.yml shares the same network unless stated otherwise, so each service can reach the others by service name (for example, hub).
You should also wait until the Selenium hub starts and its browser containers attach.
For instance, with this:
while ! curl -sSL "http://$SELENIUMHUBHOST:4444/status" 2>&1 | grep "\"ready\": true" >/dev/null; do
echo 'Waiting for the Grid'
sleep 1
done
while ! curl -sSL "http://$SELENIUMHUBHOST:4444/status" 2>&1 | grep "\"browserName\": \"$BROWSER\"" >/dev/null; do
echo "Waiting for the node $BROWSER"
sleep 1
done
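Putting it together, the test container's entrypoint might run that wait loop before the tests (a sketch; entrypoint.sh and the BROWSER variable are hypothetical, while URL: hub comes from the compose file above):

#!/bin/sh
# entrypoint.sh (hypothetical) for the myapp service
SELENIUMHUBHOST="${URL:-hub}"      # the compose file passes URL: hub
BROWSER="${BROWSER:-chrome}"       # hypothetical: which node to wait for
while ! curl -sSL "http://$SELENIUMHUBHOST:4444/status" 2>&1 | grep "\"ready\": true" >/dev/null; do
  echo 'Waiting for the Grid'
  sleep 1
done
# run the Cucumber suite against http://hub:4444/wd/hub instead of a hard-coded IP
exec bundle exec cucumber "$@"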
When entering ddev start in the terminal, I get the error
Failed to start xxx: web container failed: log=, err=container exited, please use 'ddev logs -s web` to find out why it failed
The error log reads:
...
+ disable_xdebug
Disabled xdebug
+ ls /var/www/html
ls: cannot open directory '/var/www/html': Stale file handle
/var/www/html does not seem to be healthy/mounted; docker may not be mounting it., exiting
+ echo '/var/www/html does not seem to be healthy/mounted; docker may not be mounting it., exiting'
+ exit 101
And I don't know what to do here. The directory /var/www does not exist, and creating it does not help. Searching the web does not bring up any valuable information; the only thing I found is this:
ls /var/www/html >/dev/null || (echo "/var/www/html does not seem to be healthy/mounted; docker may not be mounting it., exiting" && exit 101)
but I have no clue what it means, nor does it explain what to do.
This is project-related; I have docker/ddev running fine in other projects, but this one is haunted or something...
My config.yaml:
APIVersion: v1.12.2
name: xxx
type: php
docroot: public
php_version: "7.2"
webserver_type: nginx-fpm
router_http_port: "80"
router_https_port: "443"
xdebug_enabled: false
additional_hostnames: []
additional_fqdns: []
mariadb_version: "10.2"
nfs_mount_enabled: true
provider: default
use_dns_when_possible: true
timezone: ""
My docker-compose.yaml:
web:
  container_name: ddev-${DDEV_SITENAME}-web
  build:
    context: '/Users/jnz/Documents/xxx/.ddev/.webimageBuild'
    args:
      BASE_IMAGE: $DDEV_WEBIMAGE
      username: 'jb'
      uid: '504'
      gid: '20'
  image: ${DDEV_WEBIMAGE}-built
  cap_add:
    - SYS_PTRACE
  volumes:
    - type: volume
      source: nfsmount
      target: /var/www/html
      volume:
        nocopy: true
    - ".:/mnt/ddev_config:ro"
    - ddev-global-cache:/mnt/ddev-global-cache
    - ddev-ssh-agent_socket_dir:/home/.ssh-agent
  restart: "no"
  user: "$DDEV_UID:$DDEV_GID"
  hostname: xxx-web
  links:
    - db:db
  # ports is list of exposed *container* ports
  ports:
    - "127.0.0.1:$DDEV_HOST_WEBSERVER_PORT:80"
    - "127.0.0.1:$DDEV_HOST_HTTPS_PORT:443"
  environment:
    - DOCROOT=$DDEV_DOCROOT
    - DDEV_PHP_VERSION=$DDEV_PHP_VERSION
    - DDEV_WEBSERVER_TYPE=$DDEV_WEBSERVER_TYPE
    - DDEV_PROJECT_TYPE=$DDEV_PROJECT_TYPE
    - DDEV_ROUTER_HTTP_PORT=$DDEV_ROUTER_HTTP_PORT
    - DDEV_ROUTER_HTTPS_PORT=$DDEV_ROUTER_HTTPS_PORT
    - DDEV_XDEBUG_ENABLED=$DDEV_XDEBUG_ENABLED
    - DOCKER_IP=127.0.0.1
    - HOST_DOCKER_INTERNAL_IP=
    - DEPLOY_NAME=local
    - VIRTUAL_HOST=$DDEV_HOSTNAME
    - COLUMNS=$COLUMNS
    - LINES=$LINES
    - TZ=
    # HTTP_EXPOSE allows for ports accepting HTTP traffic to be accessible from <site>.ddev.site:<port>
    # To expose a container port to a different host port, define the port as hostPort:containerPort
    - HTTP_EXPOSE=${DDEV_ROUTER_HTTP_PORT}:80,${DDEV_MAILHOG_PORT}:8025
    # You can optionally expose an HTTPS port option for any ports defined in HTTP_EXPOSE.
    # To expose an HTTPS port, define the port as securePort:containerPort.
    - HTTPS_EXPOSE=${DDEV_ROUTER_HTTPS_PORT}:80
    - SSH_AUTH_SOCK=/home/.ssh-agent/socket
    - DDEV_PROJECT=xxx
  labels:
    com.ddev.site-name: ${DDEV_SITENAME}
    com.ddev.platform: ddev
    com.ddev.app-type: php
    com.ddev.approot: $DDEV_APPROOT
  external_links:
    - "ddev-router:xxx.ddev.site"
  healthcheck:
    interval: 1s
    retries: 10
    start_period: 10s
    timeout: 120s
So, as @rfay pointed out in the comments, the problem was caused by macOS Catalina directory restrictions.
I had to go to System Settings > Security > Privacy > Files & Folders and add /sbin/nfsd, which now has full disk access.
Besides that, I granted Docker access to Documents.
Now ddev is up and running, even in folders inside /Users/xxx/Documents.
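Two quick checks to confirm NFS is healthy after granting access (a sketch; ddev debug nfsmount is ddev's built-in NFS mount test, and showmount lists the active NFS exports on macOS):

# test NFS mounting for the current project
ddev debug nfsmount
# list what the local NFS server is exporting
showmount -e 127.0.0.1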
I want to create a 3-node Elasticsearch cluster with 1 master node and 2 worker nodes, using ES v6 and Swarm v1.18. Could anyone help?
You need to create an Elasticsearch stack that runs 3 replicas of a single service.
Create the file 'elasticsearch-swarm.yaml':
sudo nano elasticsearch-swarm.yaml
Type the following:
version: '3.7'
services:
  elasticsearch1:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.2.4
    hostname: elasticsearch1
    volumes:
      - elasticsearch1-data:/usr/share/elasticsearch/data
    environment:
      - cluster.name=elasticsearch-cluster
      - "discovery.zen.ping.unicast.hosts=tasks.elasticsearch1"
      - "network.host=0.0.0.0"
      - "node.max_local_storage_nodes=2"
    ports:
      - "9200:9200"
    networks:
      - elasticsearch_distributed
    deploy:
      replicas: 3
      restart_policy:
        delay: 30s
        max_attempts: 10
        window: 120s
volumes:
  elasticsearch1-data:
networks:
  elasticsearch_distributed:
    driver: overlay
Deploy the stack file:
sudo docker stack deploy --compose-file=elasticsearch-swarm.yaml elasticsearch
This command will create 3 replicas of the Elasticsearch server inside the same cluster.
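To confirm the three replicas came up and formed a single cluster (a sketch; the service name elasticsearch_elasticsearch1 follows from the stack name used above):

# check that all 3 replicas are running
sudo docker service ps elasticsearch_elasticsearch1
# ask any node for the cluster health; "number_of_nodes" : 3 means
# the replicas discovered each other via tasks.elasticsearch1
curl -s http://localhost:9200/_cluster/health?pretty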
If you receive an error that max_map_count is too low and must be at least 262144, execute the steps below:
Edit the file /etc/sysctl.conf:
sudo nano /etc/sysctl.conf
Add this key at the end of the file:
vm.max_map_count=262144
Apply the setting to the current instance:
sudo sysctl -w vm.max_map_count=262144
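To confirm the setting took effect (the /etc/sysctl.conf entry makes it persist across reboots, while sysctl -w applies it immediately):

sysctl vm.max_map_count
# expected: vm.max_map_count = 262144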