Golang with Cassandra DB using docker-compose: cannot connect (gocql)

I am trying to set up a Cassandra DB and connect to it with a Go app.
This is my docker-compose:
version: "3.6"
services:
  cassandra:
    image: cassandra:4.0
    ports:
      - 9042:9042
    volumes:
      - ~/apps/cassandra:/var/lib/cassandra
    environment:
      - CASSANDRA_CLUSTER_NAME=mycluster
  myapp:
    ...
    ports:
      - 4242:4242
      - 4243:4243
    depends_on:
      - cassandra
    ...
networks:
  default:
    driver: bridge
I start Cassandra using
docker-compose up cassandra
and then wait for it to be ready.
Then I try to connect to Cassandra locally using
> cqlsh
Connected to mycluster at 127.0.0.1:9042
and then I try to connect to it from my (dockerized) Go app using gocql:
cluster := gocql.NewCluster("127.0.0.1")
session, err := cluster.CreateSession()
(I also tried setting options such as Consistency and ProtoVersion=4, with the same results.)
It then fails with:
Cannot connect to db: gocql: unable to create session: unable to discover protocol version: dial tcp 127.0.0.1:9042: connect: connection refused
Do you have any idea why it can't connect?
Thanks!

Each container has its own localhost (127.0.0.1) address, so you need to connect to the IP address of your machine (if you use the bridge network), or, maybe better, connect by the service name (cassandra).

If both containers use the bridge network, you need to specify the network name in both services; in your app container, the host will then be the cassandra (Docker) container:
services:
  cassandra:
    image: cassandra:4.0
    container_name: cassandra
    ports:
      - 9042:9042
    volumes:
      - ~/apps/cassandra:/var/lib/cassandra
    networks:
      - default
    environment:
      - CASSANDRA_CLUSTER_NAME=mycluster
  myapp:
    ...
    ports:
      - 4242:4242
      - 4243:4243
    depends_on:
      - cassandra
    networks:
      - default
    environment:
      - HOSTS=cassandra
    ...
networks:
  default:
    driver: bridge
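On the Go side, point gocql at the service name instead of 127.0.0.1. Here is a minimal sketch, assuming the HOSTS variable from the compose file above; the retry loop is an addition of mine, since Cassandra can take a while to start accepting connections:
package main

import (
    "log"
    "os"
    "time"

    "github.com/gocql/gocql"
)

func main() {
    // HOSTS is set to "cassandra" in the compose file above.
    cluster := gocql.NewCluster(os.Getenv("HOSTS"))
    cluster.ProtoVersion = 4

    // Retry instead of failing on the first attempt, because the broker
    // container may still be starting when this app comes up.
    var session *gocql.Session
    var err error
    for i := 0; i < 10; i++ {
        session, err = cluster.CreateSession()
        if err == nil {
            break
        }
        log.Printf("cassandra not ready yet (%v), retrying...", err)
        time.Sleep(5 * time.Second)
    }
    if err != nil {
        log.Fatalf("cannot connect to db: %v", err)
    }
    defer session.Close()
    log.Println("connected")
}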

Related

Facing error response from daemon - Windows

I am trying to run Apache Kafka on Windows using Docker, and my docker-compose.yml is as follows:
version: "3"
services:
  spark:
    image: jupyter/pyspark-notebook
    ports:
      - "9092:9092"
      - "4010-4109:4010-4109"
    volumes:
      - ./notebooks:/home/jovyan/work/notebooks/
  zookeeper:
    image: 'bitnami/zookeeper:latest'
    container_name: zookeeper
    ports:
      - '2181:2181'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: 'bitnami/kafka:latest'
    container_name: kakfa
    ports:
      - '9092:9092'
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_LISTENERS=PLAINTEXT://:9092
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper
When I execute the command
docker-compose -f docker-compose.yml up
I get an error:
Error response from daemon: driver failed programming external connectivity on endpoint kafka-spark-1 (452eae1760b7860e3924c0e630943f825a809272760c8aa8bbb2f58ab2865377): Bind for 0.0.0.0:9092 failed: port is already allocated
I have tried net stop winnat and net start winnat; unfortunately, that didn't work.
Would appreciate any kind of help!
Spark isn't running Kafka. Remove the ports mapping here:
spark:
  image: jupyter/pyspark-notebook
  ports:
    - "9092:9092"
Also, change the advertised listeners variable for Kafka to use the proper hostname, otherwise Spark will not be able to reach it:
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
You can then also remove the ports for the Kafka container, since you won't have access from the host anyway, unless you add external listeners.
You may also be interested in an example notebook I use to test PySpark with Kafka.
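If access from the host is still needed, the usual pattern is two listeners: one advertised with the internal hostname and one with localhost. A hedged sketch, reusing the question's bitnami-style variables (the listener names and port 9094 are illustrative choices, and newer bitnami images may expect these under the KAFKA_CFG_ prefix):
kafka:
  image: 'bitnami/kafka:latest'
  ports:
    - '9094:9094'
  environment:
    - KAFKA_LISTENERS=INTERNAL://:9092,EXTERNAL://:9094
    - KAFKA_ADVERTISED_LISTENERS=INTERNAL://kafka:9092,EXTERNAL://localhost:9094
    - KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
    - KAFKA_INTER_BROKER_LISTENER_NAME=INTERNAL
    - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
    - ALLOW_PLAINTEXT_LISTENER=yes
Containers on the same network then connect to kafka:9092, while host clients connect to localhost:9094.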

I want my client and Kafka broker to communicate using docker compose

My client, Kafka, and Zookeeper are in the same network, and I am trying to connect from the client to Kafka with SERVICE_NAME:PORT, but I get this error:
driver-service-container | 2022-07-24 09:00:05.076 WARN 1 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Connection to node 1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
I know that containers in the same network can easily communicate using the service name, but I don't understand why it doesn't work here.
The name of my client trying to communicate with Kafka is driver-service.
I looked through these resources, but according to them my method should work:
Connect to Kafka running in Docker
My Python/Java/Spring/Go/Whatever Client Won't Connect to My Apache Kafka Cluster in Docker/AWS/My Brother's Laptop. Please Help!
driver-service GitHub repository
My docker-compose file:
version: '3'
services:
  gateway-server:
    image: gateway-server-image
    container_name: gateway-server-container
    ports:
      - '5555:5555'
    environment:
      - SECURE_KEY_USERNAME=randomSecureKeyUsername!
      - SECURE_KEY_PASSWORD=randomSecureKeyPassword!
      - PASSENGER_SERVICE_URL=172.24.2.4:4444
      - DRIVER_SERVICE_URL=172.24.2.5:3333
    networks:
      microservicesNetwork:
        ipv4_address: 172.24.2.6
  driver-service:
    image: driver-service-image
    container_name: driver-service-container
    ports:
      - '3333:3333'
    environment:
      - NOTIFICATION_SERVICE_URL=172.24.2.3:8888
      - PAYMENT_SERVICE_URL=172.24.2.2:7777
      - SECURE_KEY_USERNAME=randomSecureKeyUsername!
      - SECURE_KEY_PASSWORD=randomSecureKeyPassword!
      - KAFKA_GROUP_ID=driver-group-id
      - KAFKA_BOOTSTRAP_SERVERS=broker:29092
      - kafka.consumer.group.id=driver-group-id
      - kafka.consumer.enable.auto.commit=true
      - kafka.consumer.auto.commit.interval.ms=1000
      - kafka.consumer.auto.offset.reset=earliest
      - kafka.consumer.max.poll.records=1
    networks:
      microservicesNetwork:
        ipv4_address: 172.24.2.5
  passenger-service:
    image: passenger-service-image
    container_name: passenger-service-container
    ports:
      - '4444:4444'
    environment:
      - PAYMENT_SERVICE_URL=172.24.2.2:7777
      - SECURE_KEY_USERNAME=randomSecureKeyUsername!
      - SECURE_KEY_PASSWORD=randomSecureKeyPassword!
    networks:
      microservicesNetwork:
        ipv4_address: 172.24.2.4
  notification-service:
    image: notification-service-image
    container_name: notification-service-container
    ports:
      - '8888:8888'
    environment:
      - SECURE_KEY_USERNAME=randomSecureKeyUsername!
      - SECURE_KEY_PASSWORD=randomSecureKeyPassword!
    networks:
      microservicesNetwork:
        ipv4_address: 172.24.2.3
  payment-service:
    image: payment-service-image
    container_name: payment-service-container
    ports:
      - '7777:7777'
    environment:
      - SECURE_KEY_USERNAME=randomSecureKeyUsername!
      - SECURE_KEY_PASSWORD=randomSecureKeyPassword!
    networks:
      microservicesNetwork:
        ipv4_address: 172.24.2.2
  zookeeper:
    image: confluentinc/cp-zookeeper:7.0.1
    container_name: zookeeper
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    networks:
      - microservicesNetwork
  broker:
    image: confluentinc/cp-kafka:7.0.1
    container_name: broker
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_INTERNAL:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092,PLAINTEXT_INTERNAL://broker:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      GROUP_ID: driver-group-id
      KAFKA_CREATE_TOPICS: "product"
    networks:
      - microservicesNetwork
  kafka-ui:
    image: provectuslabs/kafka-ui
    container_name: kafka-ui
    ports:
      - "8080:8080"
    restart: always
    environment:
      - KAFKA_CLUSTERS_0_NAME=broker
      - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=broker:29092
      - KAFKA_CLUSTERS_0_ZOOKEEPER=zookeeper:2181
      - KAFKA_CLUSTERS_0_READONLY=true
    networks:
      - microservicesNetwork
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    platform: linux/x86_64
    environment:
      - discovery.type=single-node
      - max_open_files=65536
      - max_content_length_in_bytes=100000000
      - transport.host=elasticsearch
    volumes:
      - $HOME/app:/var/app
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - microservicesNetwork
  postgresql:
    image: postgres:11.1-alpine
    platform: linux/x86_64
    container_name: postgresql
    volumes:
      - ./postgresql/:/var/lib/postgresql/data/
    environment:
      - POSTGRES_PASSWORD=123456
      - POSTGRES_USER=postgres
      - POSTGRES_DB=cqrs_db
    ports:
      - "5432:5432"
    networks:
      - microservicesNetwork
networks:
  microservicesNetwork:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.24.2.0/16
          gateway: 172.24.2.1
My application.prod.properties:
#datasource
spring.datasource.url=jdbc:h2:mem:db_driver
spring.datasource.username=root
spring.datasource.password=1234
spring.datasource.driver-class-name=org.h2.Driver
spring.jpa.database-platform=org.hibernate.dialect.H2Dialect
#need spring-security config.
spring.h2.console.enabled=false
spring.h2.console.path=/h2-console
spring.jpa.show-sql=true
service.security.secure-key-username=${SECURE_KEY_USERNAME}
service.security.secure-key-password=${SECURE_KEY_PASSWORD}
payment.service.url=${PAYMENT_SERVICE_URL}
notification.service.url=${NOTIFICATION_SERVICE_URL}
#kafka configs
kafka.bootstrap.servers=${KAFKA_BOOTSTRAP_SERVERS}
kafka.group.id=${KAFKA_GROUP_ID}
spring.cache.cache-names=driver
spring.jackson.serialization.fail-on-empty-beans=false
spring.http.multipart.max-file-size=10MB
spring.http.multipart.max-request-size=11MB
If the error says localhost/127.0.0.1:9092, then your environment variable isn't being used.
In the startup logs from the container, look at the AdminClientConfig or ConsumerConfig sections, and you'll see the real bootstrap address that's being used.
KAFKA_BOOTSTRAP_SERVERS=broker:29092 is correct, based on your KAFKA_ADVERTISED_LISTENERS.
But in your properties, it's unclear how this is used without seeing your config class:
kafka.bootstrap.servers=${KAFKA_BOOTSTRAP_SERVERS}
If you read the Spring Kafka documentation closely, you'll see it needs to be spring.kafka.bootstrap-servers in order to be wired in automatically.
Side note: all those kafka.consumer. attributes would need to be set as JVM properties, not container environment variables.
Also, Docker services should be configured to communicate with each other by service name, not by assigned IP addresses.
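For reference, the automatic wiring described above would look like this in the properties file (a sketch; the environment variable name comes from the compose file above):
# picked up by Spring Boot's Kafka auto-configuration
spring.kafka.bootstrap-servers=${KAFKA_BOOTSTRAP_SERVERS}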
Problem solved 😊
If I run driver-service on my local computer, it connects via localhost:9092, but if driver-service and Kafka are in the same Docker network, it needs to connect via KAFKA_IP:29092 (the service name can be used instead of KAFKA_IP); Kafka expects us to configure it for these different network environments (Source). When I ran my driver-service application on my local computer, Kafka and driver-service could communicate, but they could not communicate in the same Docker network. That is, driver-service was not using the Kafka connection address I had defined in the application.prod.properties file that my application should use while running in Docker.
The problem was in my Spring Kafka integration. I was trying to give my client application the address to connect to Kafka via the kafka.bootstrap.servers key in my properties file, defining the key there and reading its value in my KafkaBean class, but the client did not see it and persistently tried to connect to localhost:9092. First, I specified my active profile in my Dockerfile with ENTRYPOINT ["java", "-Dspring.profiles.active=prod", "-jar", "driver-service-0.0.2-SNAPSHOT.jar"], so that my application.prod.properties file is used when running in the Docker environment. Then, using the key spring.kafka.bootstrap-servers instead of kafka.bootstrap.servers, as stated in the Spring Kafka documentation (Source), Spring can automatically detect the address it should connect to Kafka on. I just had to give the producer the Kafka address as well, using the @Value annotation, so that driver-service and Kafka could communicate seamlessly in the Docker network 😇
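A minimal sketch of the producer wiring described above, using the standard Spring Kafka API (the class and bean names are illustrative):
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class KafkaProducerConfig {

    // Resolved from application.prod.properties, which in turn reads the
    // KAFKA_BOOTSTRAP_SERVERS container environment variable.
    @Value("${spring.kafka.bootstrap-servers}")
    private String bootstrapServers;

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(props);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}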
Thank you very much @OneCricketeer and @Svend for your help.

Docker-compose: my app can't connect to the database

Here is my docker-compose file
version: '3'
services:
  mysql:
    image: mysql:5.7
    command: --default-authentication-plugin=mysql_native_password
    container_name: mysql
    hostname: mysqlServiceHost
    network_mode: bridge
    ports:
      - "3306:3306"
    restart: on-failure
    volumes:
      - ./mysql_data:/var/lib/mysqldocker
      - ./my.cnf:/etc/mysql/conf.d/my.cnf
      - ./mysql/init:/docker-entrypoint-initdb.d/
      - ./shop.sql:/docker-entrypoint-initdb.d/shop.sql
    environment:
      - MYSQL_ROOT_PASSWORD=a123456
      - MYSQL_DATABASE=shop
  redis:
    image: redis:3
    container_name: redis
    host: redis
    hostname: redisServiceHost
    network_mode: bridge
    restart: on-failure
    ports:
      - "6379:6379"
  golang:
    build: .
    restart: always
    network_mode: bridge
    ports:
      - "8080:8080"
    depends_on:
      - mysql
      - redis
    links:
      - mysql
      - redis
    volumes:
      - /xiangmu/go/src:/go
    tty: true
This is my Go code for connecting to MySQL:
mysqladmin = "root"
mysqlpwd = "a123456"
mysqldb = "shop"
DB, err = gorm.Open("mysql", mysqladmin+":"+mysqlpwd+"@tcp(mysqlServiceHost)/"+mysqldb+"?charset=utf8"+"&parseTime=True&loc=Local")
This is my Go code for connecting to Redis:
config := map[string]string{
    "key":      beego.AppConfig.String("redisKey"),
    "conn":     "redisServiceHost:6379",
    "dbNum":    beego.AppConfig.String("redisDbNum"),
    "password": beego.AppConfig.String("redisPwd"),
}
bytes, _ := json.Marshal(config)
redisClient, err = cache.NewCache("redis", string(bytes))
They both fail with the same kind of error:
dial tcp: lookup redisServiceHost on 100.100.2.136:53: no such host
dial tcp: lookup mysqlServiceHost on 100.100.2.136:53: no such host
I did successfully connect to Redis once, that time using the IP address of the Redis container, but after starting docker-compose again I couldn't connect anymore. It seems to be a hostname problem; I have tried many methods, to no avail.
All services in the same docker-compose file join the same network, and each container can look up a service name (redis and mysql in your example) to get back the appropriate container's IP address.
So you can use the service names; try changing to:
DB, err = gorm.Open("mysql", mysqladmin+":"+mysqlpwd+"@tcp(mysql)/"+mysqldb+"?charset=utf8"+"&parseTime=True&loc=Local")
"conn": "redis:6379",
For more details, please check https://docs.docker.com/compose/networking/
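Put together, a minimal sketch of the MySQL side using the compose service name (gorm v1 import paths assumed, to match the question's gorm.Open signature):
package main

import (
    "log"

    "github.com/jinzhu/gorm"
    _ "github.com/jinzhu/gorm/dialects/mysql" // registers the mysql dialect for gorm v1
)

func main() {
    // "mysql" is the compose service name, resolvable by the other
    // containers; 3306 is the port inside that container.
    dsn := "root:a123456@tcp(mysql:3306)/shop?charset=utf8&parseTime=True&loc=Local"
    db, err := gorm.Open("mysql", dsn)
    if err != nil {
        log.Fatalf("cannot connect to mysql: %v", err)
    }
    defer db.Close()
    log.Println("connected to mysql")
}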

Can't connect to Docker SQL for Windows by hostname

I have the following docker-compose file (part of it):
version: '3.7'
services:
  # DB Server ==========================================================================================================
  mssqlsimple:
    image: microsoft/mssql-server-windows-developer:2017-latest
    volumes:
      - ".\\Prm.DbContext.Application\\FullInit:C:\\data"
    container_name: pbpmssqlsimple
    ports:
      #- "1403:1433"
      - target: 1433
        published: 1403
        protocol: tcp
        mode: host
    networks:
      - backend
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "SP_116b626d-ed7e-4f5d123#"
...
After docker-compose up I have an instance of SQL Server and can connect to it by IP (172.21.69.132) or by its alias ID (0338726df5ba) from the Docker config, but I can't connect by the hostname mssqlsimple (or pbpmssqlsimple).
(fragment of the config JSON)
I tried the following, but it didn't help:
disabling the Windows firewall
connecting with the hostname and port (mssqlsimple, 1403)
using the simple syntax for ports ("1403:1433")
Please tell me how to solve this problem.

Docker Container Connection Refused MacOS

I have this docker-compose file:
networks:
  default:
    ipam:
      config:
        - subnet: 10.48.0.0/16
          gateway: 10.48.0.1
services:
  haproxy:
    build: haproxy
    container_name: haproxy
    volumes:
      - ./haproxy/conf/:/usr/local/etc/haproxy/
      - ./haproxy/ssl/:/etc/ssl/xip.io/
    ports:
      - "80:80"
      - "443:443"
    networks:
      default:
        ipv4_address: 10.48.0.2
  server:
    build: server
    container_name: server
    restart: always
    environment:
      - ENV=env=production db=true
    ports:
      - "8081:8081"
    volumes:
      - ./server/config:/usr/src/app/config
    depends_on:
      - haproxy
    networks:
      default:
        ipv4_address: 10.48.0.4
  frontend:
    build: frontend
    container_name: frontend
    restart: always
    ports:
      - "8080:8080"
    volumes:
      - ./frontend/config:/usr/src/app/config
    depends_on:
      - server
    networks:
      default:
        ipv4_address: 10.48.0.5
version: '2'
It deploys a backend server and a frontend interface inside a subnet defined in the range 10.48.0.0/16.
So I tried to assign a fixed IP to each container. On Linux everything is fine and I can reach 10.48.0.4:8081/api, but on macOS, when I try to do the same thing, I get ERR_CONNECTION_REFUSED.
If I connect without the IP, via localhost:8081/api, it works. But with multiple containers, I have to access them directly by IP.
Inside each container, if I try to ping the other IP address (for example, from the frontend container with IP 10.48.0.5 I ping 10.48.0.4), everything is OK.
So my question is: how can I make an HTTP call to an API that lives in another service? Thanks for your help.
I've read everywhere that this is a well-known situation on Windows and Mac, but not on Linux, where it is possible to make requests from the client side directly to a container's IP address. This is not possible on Mac, and there is still an open issue about it on GitHub.
In this case, I've used haproxy in order to proxy requests to each container.
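A minimal sketch of what that haproxy.cfg could look like, routing by path to the two services from the compose file above (the backend names and the /api prefix are illustrative assumptions):
defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend http-in
    bind *:80
    acl is_api path_beg /api
    use_backend api_backend if is_api
    default_backend web_backend

backend api_backend
    # "server" is the compose service name of the backend container
    server api server:8081

backend web_backend
    server web frontend:8080
With this in place, the browser only ever talks to localhost (port 80), and haproxy forwards /api requests to the backend container over the Docker network, which sidesteps the macOS limitation entirely.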
