I'm trying to connect Go and Redis through Docker using docker-compose, but I'm not having much luck. I have published my attempt at https://github.com/davidwilde/docker-compose-golang-redis/tree/stackoverflow_question and listed the logs below.
Redis says it is ready to accept connections, but my Go app using gopkg.in/redis.v3 says no.
~/workspace/composetest master ● docker-compose up
Starting composetest_db_1...
Starting composetest_web_1...
.
.
.
ur kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
db_1 | 1:M 20 Nov 05:58:33.371 * DB loaded from disk: 0.000 seconds
db_1 | 1:M 20 Nov 05:58:33.371 * The server is now ready to accept connections on port 6379
web_1 | panic: dial tcp [::1]:6379: getsockopt: connection refused
web_1 |
web_1 | goroutine 1 [running]:
web_1 | main.main()
web_1 | /go/src/app/app.go:19 +0x131
web_1 |
web_1 | goroutine 17 [syscall, locked to thread]:
web_1 | runtime.goexit()
web_1 | /usr/local/go/src/runtime/asm_amd64.s:1696 +0x1
So we have two different containers, which means two different "localhost"s in this case.
client := redis.NewClient(&redis.Options{
    Addr:     "localhost:6379", // "localhost" here resolves inside the app's own container
    Password: "",
    DB:       0,
})
So, your app is making requests to its own sandboxed container, not to your "other" sandboxed container that contains Redis.
You have two options:
Give a mapping in your compose file, like redisdb:db, and pass that hostname instead of localhost (see the sketch after this list).
Or, use the "--net=host" option to provide common networking for your containers without changing your code.
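For the first option, a minimal sketch (assuming the service names from your compose file, where the Redis service is called db):

web:
  build: .
  links:
    - db   # makes the hostname "db" resolve inside the web container
db:
  image: redis

and then point the client at that hostname instead, e.g. Addr: "db:6379".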
The answer from @Gladmir is great. Just to expand on his/her answer: I didn't need to remove localhost from my Go code:
client := redis.NewClient(&redis.Options{
    Addr:     "localhost:6379",
    Password: "",
    DB:       0,
})
I changed my Docker Compose file to use network_mode: "host":
version: "3.9"
services:
web:
build:
context: .
network_mode: "host"
redis:
container_name: "redis"
image: "redis:alpine"
command: redis-server /usr/local/etc/redis/redis.conf
ports:
- "6379:6379"
volumes:
- $PWD/configs/redis.conf:/usr/local/etc/redis/redis.conf
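For context: with network_mode: "host" the web container shares the host's network stack, and the redis service still publishes 6379 to the host through its ports mapping, so localhost:6379 from the app lands on the published Redis port. If you have redis-cli installed on the host, redis-cli -h 127.0.0.1 -p 6379 ping is a quick sanity check.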
I have configured HAProxy as a load balancer for two containerized Spring Boot applications.
Below is the sample Docker Compose configuration:
version: '3.3'
services:
  wechat-1:
    image: xxxxxx/wechat-social-connector:2.0.0
    container_name: wechat-1
    ports:
      - 81:8000
    networks:
      - web
    # depends_on:
    #   - wechat-2
  wechat-2:
    image: xxxxxxxxx/wechat-social-connector:2.0.0
    container_name: wechat-2
    ports:
      - 82:8000
    networks:
      - web
  haproxy:
    build: ./haproxy
    container_name: haproxy
    ports:
      - 80:80
    networks:
      - web
    # depends_on:
    #   - wechat-1
networks:
  web:
Dockerfile
FROM haproxy:2.1.4
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
HAProxy configuration file
global
    debug
    daemon
    maxconn 2000

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in
    bind *:80
    default_backend servers

backend servers
    mode http
    option httpchk
    balance roundrobin
    server wechat-1 wechat-1:81 check
    server wechat-2 wechat-2:82 check
When I try to access my endpoints through port 80, I always get "service unavailable".
Debugging the HAProxy logs, I noticed the errors below:
Creating haproxy ... done
Creating wechat-2 ... done
Creating wechat-1 ... done
Attaching to wechat-2, wechat-1, haproxy
haproxy | Available polling systems :
haproxy | epoll : pref=300, test result OK
haproxy | poll : pref=200, test result OK
haproxy | select : pref=150, test result FAILED
haproxy | Total: 3 (2 usable), will use epoll.
haproxy |
haproxy | Available filters :
haproxy | [SPOE] spoe
haproxy | [CACHE] cache
haproxy | [FCGI] fcgi-app
haproxy | [TRACE] trace
haproxy | [COMP] compression
haproxy | Using epoll() as the polling mechanism.
haproxy | [NOTICE] 144/185524 (1) : New worker #1 (8) forked
haproxy | [WARNING] 144/185524 (8) : Server servers/wechat-1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
haproxy | [WARNING] 144/185525 (8) : Server servers/wechat-2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
haproxy | [ALERT] 144/185525 (8) : backend 'servers' has no server available!
From the logs I understand that HAProxy is not able to connect to the other two containers, which themselves are running perfectly without any issues.
I tried using the depends_on attribute (commented out for the time being), but the issue stays the same.
Can someone help me fix this issue?
Please try the configuration below. There are a few changes in haproxy.cfg: most importantly, the backend must point at the ports the applications listen on inside their containers (80 for the nginx stand-ins used here, 8000 for your Spring Boot images), not at the published host ports 81/82, because HAProxy reaches the other containers directly over the Compose network.
docker-compose.yaml
version: '3.3'
services:
  wechat-1:
    image: nginx
    container_name: wechat-1
    ports:
      - 81:80
    networks:
      - web
    depends_on:
      - wechat-2
  wechat-2:
    image: nginx
    container_name: wechat-2
    ports:
      - 82:80
    networks:
      - web
  haproxy:
    build: ./haproxy
    container_name: haproxy
    ports:
      - 80:80
    networks:
      - web
    depends_on:
      - wechat-1
networks:
  web:
Dockerfile
FROM haproxy
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
haproxy.cfg
global
    debug
    daemon
    maxconn 2000

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in
    bind *:80
    default_backend servers

backend servers
    mode http
    option forwardfor
    balance roundrobin
    server wechat-1 wechat-1:80 check
    server wechat-2 wechat-2:80 check
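A note on the health checks (my reading of the change, so treat it as a sketch of the reasoning): option httpchk was swapped for a plain TCP check plus option forwardfor. A bare check only verifies that a TCP connection succeeds; option httpchk sends an actual HTTP request (OPTIONS / by default) and marks a server DOWN unless it gets a valid response, so only re-enable it if your apps answer on that path. To confirm name resolution over the Compose network, something like docker exec haproxy getent hosts wechat-1 should work, assuming getent is available in the image.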
HAProxy logs
Attaching to wechat-2, wechat-1, haproxy
haproxy | Using epoll() as the polling mechanism.
haproxy | Available polling systems :
haproxy | epoll : pref=300, test result OK
haproxy | poll : pref=200, test result OK
haproxy | select : pref=150, test result FAILED
haproxy | Total: 3 (2 usable), will use epoll.
haproxy |
haproxy | Available filters :
haproxy | [SPOE] spoe
haproxy | [CACHE] cache
haproxy | [FCGI] fcgi-app
haproxy | [TRACE] trace
haproxy | [COMP] compression
haproxy | [NOTICE] 144/204217 (1) : New worker #1 (6) forked
I'm using docker-compose to run three containers. Two of them depend on the database, so I'm using wait-for-it.sh to make sure they are not run until the database is listening.
This is my docker-compose.yml file:
web:
  build: ./docker/web
  command: ["./wait-for-it.sh", "db:5432", "--", "python", "manage.py", "runserver", "0.0.0.0:8080"]
  ports:
    - "8080:8080"
  depends_on:
    - db
    - spider
  links:
    - db
When I run the docker-compose up command, I get the error:
web_1 | wait-for-it.sh: waiting 15 seconds for db:5432
web_1 | wait-for-it.sh: db:5432 is available after 0 seconds
web_1 | python: can't open file 'manage.py': [Errno 2] No such file or directory
When I add the volume .:/src, manage.py is found but wait-for-it.sh isn't:
ERROR: for web Cannot start service web: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"./wait-for-it.sh\": stat ./wait-for-it.sh: no such file or directory": unknown
I added the wait-for-it.sh file to the directory where the Dockerfile for the web service is.
Any idea how I can make this work?
EDIT
Here's the Dockerfile used in docker-compose:
FROM python:3.6
ENV PYTHONUNBUFFERED 1
RUN mkdir /src
COPY . /src
WORKDIR /src
RUN pip install -r requirements.txt
I fixed it by changing my approach. (In hindsight, the mount was the culprit: binding .:/src over the image's /src means only files in the host project root are visible, and wait-for-it.sh lived next to the Dockerfile in ./docker/web, not in the root.) I added a healthcheck to the db service:
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost:5432"]
  interval: 5s
  timeout: 30s
  retries: 5
And restart policies to the other services:
restart: on-failure
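As an aside, Postgres doesn't speak HTTP, so a curl probe against port 5432 will generally never report healthy; the containers most likely came up thanks to the restart policy rather than the check itself. A more direct probe (a sketch assuming the standard postgres image and its default user) is pg_isready:

healthcheck:
  test: ["CMD-SHELL", "pg_isready -U postgres"]
  interval: 5s
  timeout: 5s
  retries: 5

Newer Compose file formats (2.1 and the current Compose spec) also let dependents wait on the result via depends_on with condition: service_healthy.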
I'm trying to test the Fabric chaincode example02 with Docker. I'm a newbie :)
This is my docker-compose.yml :
membersrvc:
  image: hyperledger/fabric-membersrvc
  command: membersrvc
vp0:
  image: hyperledger/fabric-peer
  environment:
    - CORE_PER_ID=vp0
    - CORE_PEER_ADDRESSAUTODETECT=true
    - CORE_VM_ENDPOINT=http://0.0.0.0:2375
    - CORE_LOGGING_LEVEL=DEBUG
  command: sh -c "sleep 5; peer node start --peer-chaincodedev"
vp1:
  extends:
    service: vp0
  environment:
    - CORE_PEER_ID=vp1
    - CORE_PEER_DISCOVERY_ROOTNODE=vp0:7051
  links:
    - vp0
vp2:
  extends:
    service: vp0
  environment:
    - CORE_PEER_ID=vp2
    - CORE_PEER_DISCOVERY_ROOTNODE=vp0:7051
  links:
    - vp0
and I run the following (I referred to the Fabric chaincode setup page):
Terminal 1:
$ docker-compose up
Terminal 2:
$ cd /hyperledger/examples/chaincode/go/chaincode_example02
$ CORE_CHAINCODE_ID_NAME=mycc CORE_PEER_ADDRESS=0.0.0.0:7051 ./chaincode_example02
Terminal 3:
$ peer chaincode deploy -n mycc -c '{"Args": ["init", "a","100", "b", "200"]}'
It works well in terminals 1 and 2, but terminal 3 shows a connection error:
2016/10/21 04:39:15 grpc: addrConn.resetTransport failed to create client
transport: connection error: desc = "transport: dial tcp 0.0.0.0:7051:
getsockopt: connection refused"; Reconnecting to {"0.0.0.0:7051" <nil>}
Error: Error building chaincode: Error trying to connect to local peer:
grpc: timed out when dialing
What's the problem?
It seems you are missing the compose statements to map the required ports from the Docker container to the host machine (where you are trying out the peer command). So it's possible that the peer process is listening on port 7051 inside your peer container, but this port is not reachable by the peer command used outside of the container in terminal 3.
You can map ports using the ports tag, e.g.:
membersrvc:
  image: hyperledger/fabric-membersrvc
  ports:
    - "7054:7054"
  command: membersrvc
vp0:
  image: hyperledger/fabric-peer
  ports:
    - "7050:7050"
    - "7051:7051"
    - "7053:7053"
  environment:
    - CORE_PER_ID=vp0
    - CORE_PEER_ADDRESSAUTODETECT=true
    - CORE_VM_ENDPOINT=http://0.0.0.0:2375
    - CORE_LOGGING_LEVEL=DEBUG
  command: sh -c "sleep 5; peer node start --peer-chaincodedev"
Before you run peer chaincode deploy ... in terminal 3, you can check whether the peer process is listening on port 7051 using:
netstat -lnptu |grep 7051
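If you prefer checking from code, here is a minimal Go sketch that does the same probe (the address is an assumption matching the CORE_PEER_ADDRESS used above):

package main

import (
    "fmt"
    "net"
    "time"
)

func main() {
    // Plain TCP dial to the port the peer should be listening on.
    conn, err := net.DialTimeout("tcp", "127.0.0.1:7051", 3*time.Second)
    if err != nil {
        fmt.Println("peer port not reachable:", err)
        return
    }
    conn.Close()
    fmt.Println("peer port is reachable")
}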
I'm trying to install Magento via Docker and I'm getting errors. I followed this tutorial:
https://github.com/andreaskoch/dockerized-magento
WARNING: Image for service installer was built because it did not already exist. To rebuild this image you must use docker-compose build or docker-compose up --build.
Creating dockerizedmagento_mysql_1
Creating dockerizedmagento_fullpagecache_1
Creating dockerizedmagento_sessions_1
Creating dockerizedmagento_cache_1
Creating dockerizedmagento_solr_1
Creating dockerizedmagento_php_1
Creating dockerizedmagento_nginx_1
ERROR: for nginx driver failed programming external connectivity on endpoint dockerizedmagento_nginx_1 (f4231c1e80b680a8a59961cb3658c55d2b64cc3a3e980e09f8b94531a608892f): Error starting userland proxy: listen tcp 0.0.0.0:80: listen: address already in use
Traceback (most recent call last):
File "", line 3, in
File "compose/cli/main.py", line 63, in main
AttributeError: 'ProjectError' object has no attribute 'msg'
docker-compose returned -1
caner@vegan:~/magento-caner/dockerized-magento$ ./magento start
dockerizedmagento_fullpagecache_1 is up-to-date
dockerizedmagento_solr_1 is up-to-date
dockerizedmagento_mysql_1 is up-to-date
dockerizedmagento_sessions_1 is up-to-date
dockerizedmagento_cache_1 is up-to-date
dockerizedmagento_php_1 is up-to-date
Starting dockerizedmagento_nginx_1
ERROR: for nginx driver failed programming external connectivity on endpoint dockerizedmagento_nginx_1 (3e4887cb50ff899b19f660d3886f03d6682cf6332373912c0aa9e3932f4d8e5c): Error starting userland proxy: listen tcp 0.0.0.0:80: listen: address already in use
Traceback (most recent call last):
File "", line 3, in
File "compose/cli/main.py", line 63, in main
AttributeError: 'ProjectError' object has no attribute 'msg'
docker-compose returned -1
caner@vegan:~/magento-caner/dockerized-magento$
After a while, when I open the page, the error is:
403 Forbidden
nginx/1.11.3
This is what the console shows:
installer_1 |
installer_1 | Fixing filesystem permissions
installer_1 | Installation fininished
installer_1 | Frontend: http://dockerized-magento.local/
installer_1 | Backend: http://dockerized-magento.local/admin
installer_1 | - Username: admin
installer_1 | - Password: password123
The running containers are:
fabf508632a6 dockerizedmagentomaster_installer "/bin/install.sh" 4 minutes ago Up 4 minutes dockerizedmagentomaster_installer_1
44a8db375577 nginx:latest "nginx -g 'daemon off" 4 minutes ago Up 4 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp dockerizedmagentomaster_nginx_1
73883d1ad863 dockerizedmagentomaster_php "php-fpm" 4 minutes ago Up 4 minutes 9000/tcp dockerizedmagentomaster_php_1
6c7e24a5f1b6 dockerizedmagentomaster_solr "/usr/bin/java -Xmx10" 4 minutes ago Up 4 minutes 8983/tcp dockerizedmagentomaster_solr_1
824765d3e79d mysql:5.5 "docker-entrypoint.sh" 4 minutes ago Up 4 minutes 0.0.0.0:3306->3306/tcp dockerizedmagentomaster_mysql_1
515fde576688 redis:latest "docker-entrypoint.sh" 28 minutes ago Up 4 minutes 6379/tcp dockerizedmagentomaster_sessions_1
e6a583fda276 redis:latest "docker-entrypoint.sh" 28 minutes ago Up 4 minutes 6379/tcp dockerizedmagentomaster_cache_1
e449459d1355 redis:latest "docker-entrypoint.sh" 28 minutes ago Up 4 minutes 6379/tcp dockerizedmagentomaster_fullpagecache_1
Also
[RuntimeException]
installer_1 | Magento folder could not be detected
installer_1 |
I also tried with my own env and yml files:
yml
web:
  image: alexcheng/magento
  ports:
    - "8081:80"
  links:
    - mysql
  env_file:
    - env
mysql:
  image: mysql:5.6.23
  env_file:
    - env
  ports:
    - "3306:3306"
env
MYSQL_DATABASE=magento
MYSQL_USER=magento
MYSQL_PASSWORD=magento
MYSQL_ROOT_PASSWORD=magento
MAGENTO_ADMIN_FIRSTNAME=Admin
MAGENTO_ADMIN_LASTNAME=MyStore
MAGENTO_ADMIN_EMAIL=admin@example.com
MAGENTO_ADMIN_USERNAME=admin
MAGENTO_ADMIN_PASSWORD=magentorocks1
This time when I go to
http://localhost:8081/
it forwards me to
http://localhost:8081/index.php/install/wizard/config/
so I can start the installation wizard, but at that point it doesn't accept my username or the other fields.
It says database connection error.
I'm trying to test out Concourse on an Ubuntu 14.04 EC2 instance. I am attempting to use the containerized version of the software with the docker-compose example shown in the documentation. However, on every attempt the concourse-web container fails after about 15 seconds. I am just looking for a quick, easy setup of Concourse on EC2 so I can test it out; how can I get it running using the containerized version of the software?
More info:
Here is the script I am using to get it up and running:
mkdir concourse
cd concourse
mkdir -p keys/web keys/worker
ssh-keygen -t rsa -f ./keys/web/tsa_host_key -N ''
ssh-keygen -t rsa -f ./keys/web/session_signing_key -N ''
ssh-keygen -t rsa -f ./keys/worker/worker_key -N ''
cp ./keys/worker/worker_key.pub ./keys/web/authorized_worker_keys
cp ./keys/web/tsa_host_key.pub ./keys/worker
# for ec2
export CONCOURSE_EXTERNAL_URL=$(wget -q -O - http://instance-data/latest/meta-data/public-ipv4)
#creating docker compose file
echo 'concourse-db:
  image: postgres:9.5
  environment:
    POSTGRES_DB: concourse
    POSTGRES_USER: concourse
    POSTGRES_PASSWORD: changeme
    PGDATA: /database
concourse-web:
  image: concourse/concourse
  links: [concourse-db]
  command: web
  ports: ["8080:8080"]
  volumes: ["./keys/web:/concourse-keys"]
  environment:
    CONCOURSE_BASIC_AUTH_USERNAME: concourse
    CONCOURSE_BASIC_AUTH_PASSWORD: changeme
    CONCOURSE_EXTERNAL_URL: "${CONCOURSE_EXTERNAL_URL}"
    CONCOURSE_POSTGRES_DATA_SOURCE: |
      postgres://concourse:changeme@concourse-db:5432/concourse?sslmode=disable
concourse-worker:
  image: concourse/concourse
  privileged: true
  links: [concourse-web]
  command: worker
  volumes: ["./keys/worker:/concourse-keys"]
  environment:
    CONCOURSE_TSA_HOST: concourse-web' > docker-compose.yml
docker-compose up -d
However, about 15 seconds after docker-compose up -d, the concorse_concourse-web_1 container stops running and I cannot connect to it through a browser at any point. Here are the docker logs of the container right at the point where it fails (there's more, but I can't fit it all, so run it yourself to see the full logs):
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x0 pc=0x5e093a]
goroutine 1 [running]:
panic(0xfba6c0, 0xc820016070)
/usr/local/go/src/runtime/panic.go:481 +0x3e6
github.com/concourse/atc/atccmd.(*ATCCommand).constructAPIHandler(0xc82023c608, 0x7ff484d1b5d0, 0xc8200501e0, 0xc82026f0e0, 0xc8202c9300, 0x7ff484d1d858, 0xc82030c5c0, 0x7ff484d1d980, 0xc8202afda0, 0x7ff484d1d958, ...)
/tmp/build/9674af12/concourse/src/github.com/concourse/atc/atccmd/command.go:787 +0x121a
github.com/concourse/atc/atccmd.(*ATCCommand).Runner(0xc82023c608, 0xc820270d30, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0)
/tmp/build/9674af12/concourse/src/github.com/concourse/atc/atccmd/command.go:221 +0xe44
main.(*WebCommand).Execute(0xc82023c608, 0xc820270d30, 0x0, 0x1, 0x0, 0x0)
/tmp/build/9674af12/gopath/src/github.com/concourse/bin/cmd/concourse/web.go:54 +0x297
github.com/concourse/bin/vendor/github.com/vito/twentythousandtonnesofcrudeoil.installEnv.func2(0x7ff484d0b5e0, 0xc82023c608, 0xc820270d30, 0x0, 0x1, 0x0, 0x0)
/tmp/build/9674af12/gopath/src/github.com/concourse/bin/vendor/github.com/vito/twentythousandtonnesofcrudeoil/environment.go:30 +0x81
github.com/concourse/bin/vendor/github.com/jessevdk/go-flags.(*Parser).ParseArgs(0xc8200512c0, 0xc82000a150, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0)
/tmp/build/9674af12/gopath/src/github.com/concourse/bin/vendor/github.com/jessevdk/go-flags/parser.go:312 +0xa34
github.com/concourse/bin/vendor/github.com/jessevdk/go-flags.(*Parser).Parse(0xc8200512c0, 0x0, 0x0, 0x0, 0x0, 0x0)
/tmp/build/9674af12/gopath/src/github.com/concourse/bin/vendor/github.com/jessevdk/go-flags/parser.go:185 +0x9b
main.main()
/tmp/build/9674af12/gopath/src/github.com/concourse/bin/cmd/concourse/main.go:29 +0x10d
Also, after trying to stop and remove the containers, the concorse_concourse-worker_1 container cannot be removed and shows up in docker ps -a as Dead. The following error occurs when attempting to remove it:
ubuntu#ip-172-31-59-167:~/concorse$ docker rm a005503d568b
Error response from daemon: Driver aufs failed to remove root filesystem a005503d568b4931f860334e95ff37265dc0913083d3592f0291e023275bbf20: rename /var/lib/docker/aufs/diff/9bcff3a39934ea3525bf8a06ef900bf9dfba59a5187747beb65e9ba5709ebf75 /var/lib/docker/aufs/diff/9bcff3a39934ea3525bf8a06ef900bf9dfba59a5187747beb65e9ba5709ebf75-removing: device or resource busy
The documentation on this has been updated with more succinct instructions.
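For anyone landing here now, the quick start in the docs amounts to downloading the published compose file and starting it, e.g. curl -O https://concourse-ci.org/docker-compose.yml followed by docker-compose up -d (check the docs if the URL has moved).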