I have a Spring project that is configured through an application.yml file. When I use placeholders in the configuration file and run the application as a built Docker image, the placeholders are not evaluated, whereas everything works fine when I run the jar outside Docker. What could be wrong here?
org.postgresql.util.PSQLException: FATAL: password authentication failed for user "${POSTGRES_DB_USER}"
application.yml
server:
  compression:
    enabled: true
    mime-types: application/json,application/xml,text/html,text/xml,text/plain
  context-path: /
  port: 8085
logging:
  config: config/log4j2-spring.xml
spring:
  datasource:
    url: jdbc:h2:./data/cdr
    username: sa
    password:
  jpa:
    hibernate:
      ddl-auto: create-drop
security:
  user:
    password: secretpassword
---
spring:
  profiles: docker
  datasource:
    url: jdbc:postgresql://postgres/databasename
    username: ${POSTGRES_DB_USER}
    password: ${POSTGRES_DB_PASS}
docker-compose.yml
version: '2'
services:
  application:
    image: domain.com:3000/application:0-SNAPSHOT
    volumes:
      - application:/application/logs
    ports:
      - 8085:8085
    links:
      - postgres
    environment:
      PASSWORD: secretpassword
  postgres:
    image: sameersbn/postgresql:9.5-2
    volumes:
      - postgres-data:/var/lib/postgresql
    environment:
      DB_NAME: databasename
      DB_USER: user
      DB_PASS: secretpassword
volumes:
  postgres-data:
    driver: local
  application-logs:
    driver: local
I think you want this:
version: '2'
services:
  application:
    image: domain.com:3000/application:0-SNAPSHOT
    volumes:
      - application-logs:/application/logs
    ports:
      - 8085:8085
    environment:
      POSTGRES_DB_USER: user
      POSTGRES_DB_PASS: secretpassword
  postgres:
    image: sameersbn/postgresql:9.5-2
    volumes:
      - postgres-data:/var/lib/postgresql
    environment:
      DB_NAME: databasename
      DB_USER: user
      DB_PASS: secretpassword
volumes:
  postgres-data: {}
  application-logs: {}
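One extra thing worth checking: the ${POSTGRES_DB_USER} and ${POSTGRES_DB_PASS} placeholders live in the docker profile document of application.yml, so that profile also has to be active inside the container. A minimal sketch of how that is commonly done in Compose, assuming the standard SPRING_PROFILES_ACTIVE variable (not shown in the original file):
services:
  application:
    environment:
      SPRING_PROFILES_ACTIVE: docker   # activate the "docker" profile document
      POSTGRES_DB_USER: user
      POSTGRES_DB_PASS: secretpassword
Spring resolves ${...} placeholders from system properties and OS environment variables, so exporting the two variables into the container environment is enough for the datasource entries to pick them up.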
Related
I have created microservices using Spring Boot and Eureka, and I am using an API Gateway in front of them. All the microservices (Eureka clients) are visible on the Eureka server, but I am getting an error like the one below.
api-gateway : 8999
product-service : 9001
product-detail-service : 9002
eureka-server : 8761
api-gateway application.properties
server.port =8999
spring.application.name = api-gateway
eureka.client.instance.preferIpAddress = true
eureka.client.serviceUrl.defaultZone= http://localhost:8761/eureka
spring.cloud.gateway.routes[0].id=product-service
spring.cloud.gateway.routes[0].uri=lb://product-service
spring.cloud.gateway.routes[0].predicates[0]=Path=/product/**
spring.cloud.gateway.routes[1].id=product-detail-service
spring.cloud.gateway.routes[1].uri=lb://product-detail-service
spring.cloud.gateway.routes[1].predicates[0]=Path=/productDetail/**
eureka-server application.properties
server.port=8761
eureka.client.register-with-eureka = false
eureka.server.waitTimeInMsWhenSyncEmpty = 0
product-detail-service application.properties
server.port=9002
spring.application.name = product-detail-service
eureka.instance.preferIpAddress = true
product-service application.properties
server.port = 9001
spring.application.name = product-service
eureka.client.instance.preferIpAddress = true
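As a side note on the properties above: product-detail-service uses eureka.instance.preferIpAddress, while api-gateway and product-service use eureka.client.instance.preferIpAddress. The IP-preference flag is an instance-level setting, so a sketch of the usual spelling (standard Spring Cloud Netflix property names assumed) would be:
eureka.instance.prefer-ip-address=true
eureka.client.serviceUrl.defaultZone=http://localhost:8761/eureka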
docker-compose.yml
version: '3.8'
services:
  api-server:
    build: ../apigateway
    ports:
      - 8999:8999
    environment:
      - eureka.client.service-url.defaultZone=http://eureka-server:8761/eureka
    depends_on:
      - product-service
      - product-detail-service
  eureka-server:
    build: ../eureka_server
    ports:
      - 8761:8761
    depends_on:
      - product-service
      - product-detail-service
  product-service:
    build: ../product_service
    ports:
      - 9001:9001
    environment:
      - eureka.client.service-url.defaultZone=http://eureka-server:8761/eureka
    depends_on:
      - product-detail-service
  product-detail-service:
    build: ../product_details_service
    ports:
      - 9002:9002
    environment:
      - eureka.client.service-url.defaultZone=http://eureka-server:8761/eureka
My Docker images are created successfully and run fine without docker-compose.
I have tried custom networks and much more, but the issue is still not resolved.
Please help; I have been trying to solve this for three days. I have searched Stack Overflow and tried a lot of solutions, but with no result.
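For reference, environment entries such as eureka.client.service-url.defaultZone=... in the Compose file above rely on Spring Boot picking the dotted name up from the environment; the form guaranteed by relaxed binding is the upper-case, underscore-separated one. A hedged sketch for one of the services (same value, just the canonical variable name):
  product-service:
    build: ../product_service
    ports:
      - 9001:9001
    environment:
      EUREKA_CLIENT_SERVICEURL_DEFAULTZONE: http://eureka-server:8761/eureka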
I have two containers: one for the Eureka server and another for Spring Cloud Gateway.
I'm running these two containers with docker-compose, but the gateway does not register with the Eureka server.
I'm using HTTPS. On localhost everything works as expected, but as soon as I move to containers the clients fail to register.
eureka.server.yml:
eureka:
  client:
    fetch-registry: false
    register-with-eureka: false
  instance:
    secure-port-enabled: true
    non-secure-port-enabled: false
server:
  port: 8761
  ssl:
    enabled: true
    key-alias: statement
    key-store: classpath:statement-keystore.p12
    key-store-password: secret
    key-store-type: PKCS12
spring:
  application:
    name: netflix-eureka
spring.cloud.gateway.yml:
eureka:
  instance:
    nonSecurePortEnabled: false
    securePortEnabled: true
    securePort: 8765
    prefer-ip-address: true
  client:
    registerWithEureka: true
    fetchRegistry: true
    serviceUrl:
      defaultZone: https://localhost:8761/eureka
logging:
  level:
    org:
      springframework:
        cloud:
          gateway: DEBUG
    reactor:
      netty:
        http:
          client: DEBUG
server:
  port: 8765
  ssl:
    enabled: true
    key-alias: statement
    key-store: classpath:statement-keystore.p12
    key-store-password: secret
    key-store-type: PKCS12
spring:
  application:
    name: api-gateway
  cloud:
    config:
      enabled: false
    gateway:
      discovery:
        locator:
          enabled: true
      httpclient:
        wiretap: true
      httpserver:
        wiretap: true
      globalcors:
        corsConfigurations:
          '[/**]':
            allowedOrigins: "https://localhost:4200"
            allowedHeaders: "*"
            allowedMethods:
              - GET
              - POST
              - PUT
              - DELETE
spring.cloud.gateway.Dockerfile:
FROM openjdk:11-jre-slim
LABEL Description="SPring cloud gateway" Version="0.0.1"
ARG VERSION=0.0.1
VOLUME /tmp
ADD target/api-gateway-${VERSION}-SNAPSHOT.jar app.jar
RUN sh -c 'touch /app.jar'
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
eureka.server.Dockerfile:
FROM openjdk:11-jre-slim
LABEL Description="Eureka Server" Version="1.0"
ARG VERSION=1.0
VOLUME /tmp
ADD target/eureka-server-${VERSION}-SNAPSHOT.jar app.jar
RUN sh -c 'touch /app.jar'
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
docker-compose.yml:
version: '3.7'
services:
  netflix-eureka:
    image: abraham/netflix-eureka
    container_name: "netflix-eureka"
    ports:
      - "8761:8761"
    networks:
      - statement-network
  api-gateway:
    image: abraham/api-gateway
    container_name: "api-gateway"
    ports:
      - "8765:8765"
    networks:
      - statement-network
    environment:
      eureka.client.service-url.default-zone: https://netflix-eureka:8761/eureka
    depends_on:
      - netflix-eureka
    links:
      - netflix-eureka
networks:
  abraham-network:
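Note that both services join a network called statement-network, while the top-level networks section declares abraham-network; Compose only attaches services to networks that are actually declared. A minimal consistent declaration (name taken from the service definitions above) would look like:
networks:
  statement-network:
    driver: bridge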
I wish to forward logs from remote EKS clusters to a centralised EKS cluster hosting ECK.
Versions in use:
EKS v1.20.7
Elasticsearch v7.7.0
Kibana v7.7.0
Filebeat v7.10.0
The setup uses an AWS NLB to forward requests to an Nginx ingress, using host-based routing.
When the DNS lookup for Elasticsearch is tested from Filebeat (filebeat test output), the request validates.
But the logs for Filebeat are telling a different story.
2021-10-05T10:39:00.202Z ERROR [publisher_pipeline_output]
pipeline/output.go:154 Failed to connect to backoff(elasticsearch(https://elasticsearch.dev.example.com:9200)):
Get "https://elasticsearch.dev.example.com:9200": Bad Request
The Filebeat agents can connect to the remote Elasticsearch via the NLB when using a curl request.
The config is below. NB: dev.example.com is the remote cluster hosting ECK.
app:
  name: "filebeat"
configmap:
  enabled: true
  filebeatConfig:
    filebeat.yml: |-
      filebeat.autodiscover:
        providers:
          - type: kubernetes
            node: ${NODE_NAME}
            hints.enabled: true
            templates:
              - config:
                  - type: container
                    paths:
                      - /var/lib/docker/containers/*/${data.kubernetes.container.id}-json.log
                    exclude_lines: ["^\\s+[\\-`('.|_]"]
                    processors:
                      - drop_event.when.not.or:
                          - contains.kubernetes.namespace: "apps-"
                          - equals.kubernetes.namespace: "cicd"
                      - decode_json_fields:
                          fields: ["message"]
                          target: ""
                          process_array: true
                          overwrite_keys: true
                      - add_fields:
                          fields:
                            kubernetes.cluster.name: dev-eks-cluster
                          target: ""
      processors:
        - add_cloud_metadata: ~
        - add_host_metadata: ~
      cloud:
        id: '${ELASTIC_CLOUD_ID}'
      cloud:
        auth: '${ELASTIC_CLOUD_AUTH}'
      output:
        elasticsearch:
          enabled: true
          hosts: "elasticsearch.dev.example.com"
          username: '${ELASTICSEARCH_USERNAME}'
          password: '${ELASTICSEARCH_PASSWORD}'
          protocol: https
          ssl:
            verification_mode: "none"
          headers:
            Host: "elasticsearch.dev.example.com"
          proxy_url: "https://example.elb.eu-west-2.amazonaws.com"
          proxy_disable: false
daemonset:
  enabled: true
  version: 7.10.0
  image:
    repository: "docker.elastic.co/beats/filebeat"
    tag: "7.10.0"
    pullPolicy: Always
  extraenvs:
    - name: ELASTICSEARCH_HOST
      value: "https://elasticsearch.dev.example.com"
    - name: ELASTICSEARCH_PORT
      value: "9200"
    - name: ELASTICSEARCH_USERNAME
      value: "elastic"
    - name: ELASTICSEARCH_PASSWORD
      value: "remote-cluster-elasticsearch-es-elastic-user-password"
  resources:
    limits:
      memory: 200Mi
    requests:
      cpu: 100m
      memory: 100Mi
clusterrolebinding:
  enabled: true
  namespace: monitoring
clusterrole:
  enabled: true
serviceaccount:
  enabled: true
  namespace: monitoring
deployment:
  enabled: false
  configmap:
    enabled: false
Any tips or suggestions on how to enable Filebeat forwarding would be much appreciated :-)
Update #1 (missing ports):
Even with the ports added in as suggested, Filebeat errors with:
2021-10-06T08:34:41.355Z ERROR [publisher_pipeline_output] pipeline/output.go:154 Failed to connect to backoff(elasticsearch(https://elasticsearch.dev.example.com:9200)): Get "https://elasticsearch.dev.example.com:9200": Bad Request
...using an AWS NLB to forward requests to an Nginx ingress, using host-based routing
How about unsetting proxy_url and proxy_disable, and setting hosts: ["<nlb url>:<nlb listener port>"] instead?
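In the filebeat.yml output section that suggestion would look roughly like this (a sketch; <nlb url> and <nlb listener port> are placeholders for the actual NLB endpoint):
output:
  elasticsearch:
    enabled: true
    hosts: ["<nlb url>:<nlb listener port>"]
    username: '${ELASTICSEARCH_USERNAME}'
    password: '${ELASTICSEARCH_PASSWORD}'
    protocol: https
    ssl:
      verification_mode: "none"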
The final working config:
app:
  name: "filebeat"
configmap:
  enabled: true
  filebeatConfig:
    filebeat.yml: |-
      filebeat.autodiscover:
        providers:
          - type: kubernetes
            node: ${NODE_NAME}
            hints.enabled: true
            templates:
              - config:
                  - type: container
                    paths:
                      - /var/lib/docker/containers/*/${data.kubernetes.container.id}-json.log
                    exclude_lines: ["^\\s+[\\-`('.|_]"]
                    processors:
                      - drop_event.when.not.or:
                          - contains.kubernetes.namespace: "apps-"
                          - equals.kubernetes.namespace: "cicd"
                      - decode_json_fields:
                          fields: ["message"]
                          target: ""
                          process_array: true
                          overwrite_keys: true
                      - add_fields:
                          fields:
                            kubernetes.cluster.name: qa-eks-cluster
                          target: ""
      processors:
        - add_cloud_metadata: ~
        - add_host_metadata: ~
      cloud:
        id: '${ELASTIC_CLOUD_ID}'
      cloud:
        auth: '${ELASTIC_CLOUD_AUTH}'
      output:
        elasticsearch:
          enabled: true
          hosts: ["elasticsearch.dev.example.com:9200"]
          username: '${ELASTICSEARCH_USERNAME}'
          password: '${ELASTICSEARCH_PASSWORD}'
          protocol: https
          ssl:
            verification_mode: "none"
daemonset:
  enabled: true
  version: 7.10.0
  image:
    repository: "docker.elastic.co/beats/filebeat"
    tag: "7.10.0"
    pullPolicy: Always
  extraenvs:
    - name: ELASTICSEARCH_HOST
      value: "https://elasticsearch.dev.example.com"
    - name: ELASTICSEARCH_PORT
      value: "9200"
    - name: ELASTICSEARCH_USERNAME
      value: "elastic"
    - name: ELASTICSEARCH_PASSWORD
      value: "remote-cluster-elasticsearch-es-elastic-user-password"
  resources:
    limits:
      memory: 200Mi
    requests:
      cpu: 100m
      memory: 100Mi
clusterrolebinding:
  enabled: true
  namespace: monitoring
clusterrole:
  enabled: true
serviceaccount:
  enabled: true
  namespace: monitoring
deployment:
  enabled: false
  configmap:
    enabled: false
In addition, the following changes were needed:
NLB: add a listener for 9200, forwarding to the ingress controller for HTTPS.
SG: open up port 9200 on the EKS worker nodes.
I could not find any up-to-date or working documentation or examples on how to set this up, so I resorted to research and trial and error. Yet I have not been able to get it running.
I have a config-server with the following dependencies:
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-config-server</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-bus-amqp</artifactId>
<version>2.2.1.RELEASE</version>
</dependency>
The config-server has the following bootstrap.yml:
spring:
  application:
    name: config-server
  rabbitmq:
    host: ${RABBIT_MQ_HOST:localhost}
    port: ${RABBIT_MQ_PORT:5672}
    username: ${RABBIT_MQ_URSER_NAME:guest}
    password: ${RABBIT_MQ_URSER_PASSWORD:guest}
And the following application.yml:
spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/MyGit/config-repository.git
          cloneOnStart: true
management:
  endpoints:
    web:
      exposure:
        include: "*"
server:
  port: 8888
All my clients have the following dependencies:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-bus-amqp</artifactId>
<version>2.2.1.RELEASE</version>
</dependency>
Their bootstrap.yml, for example, looks like this:
spring:
  application:
    name: user-service
  cloud:
    config:
      uri: http://${CONFIG_HOST:localhost}:${CONFIG_PORT:8888}
  rabbitmq:
    host: ${RABBIT_MQ_HOST:localhost}
    port: ${RABBIT_MQ_PORT:5672}
    username: ${RABBIT_MQ_URSER_NAME:guest}
    password: ${RABBIT_MQ_URSER_PASSWORD:guest}
And their application.yml looks like this:
management:
  endpoints:
    web:
      exposure:
        include: "*"
server:
  port: 8000
eureka:
  client:
    registerWithEureka: true
    fetchRegistry: true
    serviceUrl:
      defaultZone: http://${EUREKA_HOST:localhost}:${EUREKA_PORT:8761}/eureka
I'm using Spring's @ConfigurationProperties, for example:
@Component
@ConfigurationProperties("user")
@Getter
@Setter
public class UserConfiguration {
    private String role;
}
I'm starting all my services (including rabbitmq) using docker and docker-compose:
version: "3"
services:
rabbitmq:
image: bitnami/rabbitmq:latest
container_name: rabbitmq
environment:
RABBITMQ_USERNAME: admin
RABBITMQ_PASSWORD: admin
ports:
- "4369:4369"
- "5672:5672"
- "15672:15672"
- "25672:25672"
config-server:
build:
context: config-server
dockerfile: Dockerfile
container_name: config-server
depends_on:
- rabbitmq
environment:
RABBIT_MQ_HOST: "rabbitmq"
RABBIT_MQ_PORT: "5672"
RABBIT_MQ_URSER_NAME: "admin"
RABBIT_MQ_URSER_PASSWORD: "admin"
ports:
- "8888:8888"
eureka-server:
build:
context: eureka-server
dockerfile: Dockerfile
container_name: eureka-server
depends_on:
- rabbitmq
- config-server
environment:
RABBIT_MQ_HOST: "rabbitmq"
RABBIT_MQ_PORT: "5672"
RABBIT_MQ_URSER_NAME: "admin"
RABBIT_MQ_URSER_PASSWORD: "admin"
CONFIG_HOST: "config-server"
CONFIG_PORT: "8888"
ports:
- "8761:8761"
zuul-gateway:
build:
context: zuul-gateway
dockerfile: Dockerfile
container_name: zuul-gateway
depends_on:
- rabbitmq
- config-server
- eureka-server
environment:
RABBIT_MQ_HOST: "rabbitmq"
RABBIT_MQ_PORT: "5672"
RABBIT_MQ_URSER_NAME: "admin"
RABBIT_MQ_URSER_PASSWORD: "admin"
CONFIG_HOST: "config-server"
CONFIG_PORT: "8888"
EUREKA_HOST: "eureka-server"
EUREKA_PORT: "8761"
ports:
- "8765:8765"
user-service:
build:
context: user-service
dockerfile: Dockerfile
container_name: user-service
depends_on:
- rabbitmq
- config-server
- eureka-server
environment:
SPRING_PROFILES_ACTIVE: "development"
RABBIT_MQ_HOST: "rabbitmq"
RABBIT_MQ_PORT: "5672"
RABBIT_MQ_URSER_NAME: "admin"
RABBIT_MQ_URSER_PASSWORD: "admin"
CONFIG_HOST: "config-server"
CONFIG_PORT: "8888"
EUREKA_HOST: "eureka-server"
EUREKA_PORT: "8761"
ports:
- "8000:8000"
user-service-2:
build:
context: user-service
dockerfile: Dockerfile
container_name: user-service-2
depends_on:
- rabbitmq
- config-server
- eureka-server
environment:
SPRING_PROFILES_ACTIVE: "development"
RABBIT_MQ_HOST: "rabbitmq"
RABBIT_MQ_PORT: "5672"
RABBIT_MQ_URSER_NAME: "admin"
RABBIT_MQ_URSER_PASSWORD: "admin"
CONFIG_HOST: "config-server"
CONFIG_PORT: "8888"
EUREKA_HOST: "eureka-server"
EUREKA_PORT: "8761"
ports:
- "8001:8000"
user-service-3:
build:
context: user-service
dockerfile: Dockerfile
container_name: user-service-3
depends_on:
- rabbitmq
- config-server
- eureka-server
environment:
SPRING_PROFILES_ACTIVE: "development"
RABBIT_MQ_HOST: "rabbitmq"
RABBIT_MQ_PORT: "5672"
RABBIT_MQ_URSER_NAME: "admin"
RABBIT_MQ_URSER_PASSWORD: "admin"
CONFIG_HOST: "config-server"
CONFIG_PORT: "8888"
EUREKA_HOST: "eureka-server"
EUREKA_PORT: "8761"
ports:
- "8002:8000"
When all my services are started, at first everything seems fine: all services are registered with Eureka, all services are callable on localhost, and they can talk to each other.
But the config refresh is not working.
When I change something in my config repository and commit and push it, it has no effect on my services; their configuration stays the same. When I manually call the bus refresh on my config-server:
http://localhost:8888/bus/refresh
with POST, as described in nearly every guide I could find so far, I get the response:
{
    "timestamp": "2020-03-30T09:33:57.818+0000",
    "status": 405,
    "error": "Method Not Allowed",
    "message": "Request method 'POST' not supported",
    "path": "/bus/refresh"
}
When I use GET instead, I get:
{
    "name": "bus",
    "profiles": [
        "refresh"
    ],
    "label": null,
    "version": "03759e798f3516da6a18fc8b61a265d37ddeff4e",
    "state": null,
    "propertySources": []
}
and it also has no effect on the configuration of my services.
And when I call the bus refresh on any of my services I get:
{
    "timestamp": "2020-03-30T09:35:39.643+0000",
    "status": 404,
    "error": "Not Found",
    "message": "No message available",
    "path": "/bus/refresh"
}
What do I need to do to get the auto configuration refresh working?
Manually calling the /actuator/refresh endpoint on each service works; they then pull the new config. So it seems like the RabbitMQ part simply is not working.
I found the solution.
Actually my setup and configuration files are totally fine; I just called the wrong endpoint. Instead of calling:
http://localhost:8888/bus/refresh
I have to call:
http://localhost:8888/actuator/bus-refresh
This works on any of the services I started. It automatically sends a message to RabbitMQ, which then publishes the refresh to all consumers; all configurations are then updated.
Though I still don't know why /bus/refresh is not working, even though it is used in many examples and tutorials. (Presumably those were written against older Spring Cloud releases: since Spring Boot 2 the actuator endpoints live under /actuator, and the bus endpoint id became bus-refresh.)
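For reference, the refresh can be triggered with a plain POST against the config-server (published on localhost:8888 in the compose file above):
curl -X POST http://localhost:8888/actuator/bus-refresh
The bus then broadcasts the refresh event over RabbitMQ to every service that has the spring-cloud-starter-bus-amqp dependency.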
I have tried to connect from a pod (JHipster) to Google Cloud SQL, but I have not been successful.
My pod is stuck in CrashLoopBackOff because it cannot connect to Cloud SQL. Error:
org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:280)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
...
ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'liquibase' defined in class path resource [cl/databin/invoicing/folio/config/LiquibaseConfiguration.class]: Invocation of init method failed; nested exception is liquibase.exception.DatabaseException: org.postgresql.util.PSQLException: Connection to localhost:5432 refused.
my folio-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: folio
  namespace: jhipster
spec:
  replicas: 2
  selector:
    matchLabels:
      app: folio
      version: "v1"
  template:
    metadata:
      labels:
        app: folio
        version: "v1"
    spec:
      containers:
        - name: folio-app
          image: skilledboy/folio:v1
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: prod
            - name: JHIPSTER_SECURITY_AUTHENTICATION_JWT_BASE64_SECRET
              valueFrom:
                secretKeyRef:
                  name: jwt-secret
                  key: secret
            - name: SPRING_DATASOURCE_URL
              value: jdbc:postgresql://localhost:5432/folio
            - name: POSTGRES_DB_USER
              value: user
            - name: POSTGRES_DB_PASSWORD
              value: password1
            - name: SPRING_SLEUTH_PROPAGATION_KEYS
              value: "x-request-id,x-ot-span-context"
            - name: JAVA_OPTS
              value: " -Xmx256m -Xms256m"
          resources:
            requests:
              memory: "256Mi"
              cpu: "500m"
            limits:
              memory: "512Mi"
              cpu: "1"
          ports:
            - name: http
              containerPort: 8081
          readinessProbe:
            httpGet:
              path: /folio/management/health
              port: http
            initialDelaySeconds: 20
            periodSeconds: 15
            failureThreshold: 6
          livenessProbe:
            httpGet:
              path: /folio/management/health
              port: http
            initialDelaySeconds: 120
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command: ["/cloud_sql_proxy",
                    "-instances=invo-project-233618:us-central1:folios=tcp:5432",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          securityContext:
            runAsUser: 2 # non-root user
            allowPrivilegeEscalation: false
          volumeMounts:
            - name: cloudsql-oauth-credential
              mountPath: /secrets/cloudsql
              readOnly: true
            - name: ssl-certs
              mountPath: /etc/ssl/certs
      volumes:
        - name: cloudsql-oauth-credential
          secret:
            secretName: cloudsql-oauth-credential
        - name: ssl-certs
          hostPath:
            path: /etc/ssl/certs
And in the configuration of my application-prod.yml:
datasource:
  type: com.zaxxer.hikari.HikariDataSource
  url: jdbc:postgresql://127.0.0.1:5432/folio
  username: ${POSTGRES_DB_USER}
  password: ${POSTGRES_DB_PASSWORD}
What do I have wrong? Can someone give me an idea of what might be the problem? Thanks.
Your problem is that you are telling the Cloud SQL proxy to run with -credential_file=/secrets/cloudsql/credentials.json, but you haven't actually provided a file at /secrets/cloudsql/ for it to use. (The volume in your config is at /etc/ssl/certs).
It's also worth pointing out that the -credential_file flag is for using a service account key, while the -token flag is used for an OAuth token (it's unclear which you are trying to use).
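A minimal sketch of what the proxy side-car needs for the service-account route, assuming the secret cloudsql-oauth-credential really does contain a credentials.json entry (all names taken from the deployment above):
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command: ["/cloud_sql_proxy",
                    "-instances=invo-project-233618:us-central1:folios=tcp:5432",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          volumeMounts:
            - name: cloudsql-oauth-credential   # must mount where -credential_file points
              mountPath: /secrets/cloudsql
              readOnly: true
      volumes:
        - name: cloudsql-oauth-credential
          secret:
            secretName: cloudsql-oauth-credential   # the secret must contain a "credentials.json" key
The secret can be created from the downloaded service-account key with, for example, kubectl create secret generic cloudsql-oauth-credential --from-file=credentials.json=<key-file>.json.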