Kubernetes: Tomcat throws Exception when enabling proxy protocol - spring-boot

I am kind of lost right now. I set up a Kubernetes cluster and deployed a Spring Boot API and a LoadBalancer, which worked fine. Now I want to enable the proxy protocol on the LoadBalancer to preserve the real client's IP, but once I do this my Spring Boot API always responds with 400 Bad Request and an IllegalArgumentException is thrown.
Here is the short stack trace (I masked the IP addresses):
2020-09-29 20:05:58.382 INFO 1 --- [nio-8080-exec-1] o.apache.coyote.http11.Http11Processor : Error parsing HTTP request header
Note: further occurrences of HTTP request parsing errors will be logged at DEBUG level.
java.lang.IllegalArgumentException: Invalid character found in the HTTP protocol [255.255.255.253 255.255.255.254]
at org.apache.coyote.http11.Http11InputBuffer.parseRequestLine(Http11InputBuffer.java:560) ~[tomcat-embed-core-9.0.37.jar!/:9.0.37]
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:260) ~[tomcat-embed-core-9.0.37.jar!/:9.0.37]
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65) ~[tomcat-embed-core-9.0.37.jar!/:9.0.37]
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:868) ~[tomcat-embed-core-9.0.37.jar!/:9.0.37]
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1589) ~[tomcat-embed-core-9.0.37.jar!/:9.0.37]
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) ~[tomcat-embed-core-9.0.37.jar!/:9.0.37]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) ~[na:na]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) ~[na:na]
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) ~[tomcat-embed-core-9.0.37.jar!/:9.0.37]
at java.base/java.lang.Thread.run(Unknown Source) ~[na:na]
I am using the hcloud-cloud-controller-manager from Hetzner.
Here is my LoadBalancer:
apiVersion: v1
kind: Service
metadata:
  labels:
    service: auth-service
  name: auth-service-service
  annotations:
    load-balancer.hetzner.cloud/name: "lb-backend"
    load-balancer.hetzner.cloud/health-check-port: "80"
    load-balancer.hetzner.cloud/uses-proxyprotocol: "true"
spec:
  ports:
    - name: http
      port: 80
      targetPort: 8080
  selector:
    service: auth-service
  externalTrafficPolicy: Local
  type: LoadBalancer
Here is my Spring Config:
spring:
  datasource:
    platform: postgres
    url: ${DATABASE_CS}
    username: ${DATABASE_USERNAME}
    password: ${DATABASE_PASSWORD}
    driver-class-name: org.postgresql.Driver
  flyway:
    schemas: authservice
  jpa:
    show-sql: false
    properties:
      hibernate:
        dialect: org.hibernate.dialect.PostgreSQLDialect
        jdbc:
          lob:
            non_contextual_creation: true
    hibernate:
      ddl-auto: validate
security:
  jwt:
    secret-key: ${JWT_SECRET_KEY}
    expires: ${JWT_EXPIRES:300000}
mail:
  from: ${MAIL_FROM}
  fromName: ${MAIL_FROM_NAME}
  smtp:
    host: ${SMTP_HOST}
    username: ${SMTP_USERNAME}
    password: ${SMTP_PASSWORD}
    port: ${SMTP_PORT:25}
mjml:
  app-id: ${MJML_APP_ID}
  app-secret: ${MJML_SECRET_KEY}
stripe:
  keys:
    secret: ${STRIPE_SECRET_KEY}
    public: ${STRIPE_PUBLIC_KEY}
server:
  forward-headers-strategy: native
As you may have noticed, I already tried to enable the forward headers based on this issue.
Thanks for your help!

You enabled the PROXY protocol, which is not HTTP but a different protocol that tunnels TCP connections to downstream servers while keeping as much connection information as possible.
https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt
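For illustration, here is roughly what Tomcat sees on the wire when the PROXY protocol is enabled (a sketch based on the spec above; the addresses, ports, and request line are made up): the connection starts with a plain-text PROXY preamble, which Tomcat tries to parse as an HTTP request line and rejects with exactly the IllegalArgumentException from the question.
PROXY TCP4 203.0.113.7 10.0.0.5 51234 80
GET /some/endpoint HTTP/1.1
Host: example.com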
I am quite sure you want to disable this
load-balancer.hetzner.cloud/uses-proxyprotocol: "true"
and instead rely on the forwarded headers to set the remote address to the correct value for the client.
To be honest, I am not aware that Tomcat supports the PROXY protocol. (Edit: Currently it does not, see https://bz.apache.org/bugzilla/show_bug.cgi?id=57830)
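A minimal sketch of the suggested change (assuming the Hetzner annotation also accepts "false"; alternatively, remove the annotation entirely, and assuming the load balancer runs in HTTP mode so it adds X-Forwarded-For). The forward-headers-strategy: native setting already in the question's Spring config can stay, so Tomcat resolves the client address from the forwarded headers:
metadata:
  annotations:
    load-balancer.hetzner.cloud/name: "lb-backend"
    load-balancer.hetzner.cloud/health-check-port: "80"
    load-balancer.hetzner.cloud/uses-proxyprotocol: "false"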

Related

Name or service unknown with eureka server

I use Spring Boot with Eureka. When I try to start the Eureka server, I get:
Caused by: java.net.UnknownHostException: MiWiFi-RA72-srv: Name or service not known
My config:
server:
  port: 8761
spring:
  security:
    user:
      name: admin
      password: password
logging:
  level:
    org:
      springframework:
        security: DEBUG
eureka:
  client:
    registerWithEureka: false
    fetchRegistry: false
    service-url:
      defaultZone: http://admin:password@localhost:8761/eureka/
  server:
    waitTimeInMsWhenSyncEmpty: 0
  instance:
    hostname: localhost
    prefer-ip-address: true
I tried replacing localhost with MiWiFi-RA72-srv, but I get the same result.
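One thing worth checking (a hedged suggestion, not from the original post): an UnknownHostException for MiWiFi-RA72-srv usually means the machine's own hostname does not resolve locally. A common workaround is to map it in /etc/hosts (sketch, assuming a Linux host):
# /etc/hosts — map the unresolvable machine hostname to loopback (hypothetical fix)
127.0.0.1   localhost MiWiFi-RA72-srv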

spring-cloud-stream-binder-kafka - Unable to create multiple kafka binders with ssl configuration

I am trying to connect to a Kafka cluster through the SASL_SSL protocol with a JAAS config as follows:
spring:
  cloud:
    stream:
      bindings:
        binding-1:
          binder: kafka-1-with-ssl
          destination: <destination-1>
          content-type: text/plain
          group: <group-id-1>
          consumer:
            header-mode: headers
        binding-2:
          binder: kafka-2-with-ssl
          destination: <destination-2>
          content-type: text/plain
          group: <group-id-2>
          consumer:
            header-mode: headers
      binders:
        kafka-1-with-ssl:
          type: kafka
          defaultCandidate: false
          environment:
            spring:
              cloud:
                stream:
                  kafka:
                    binder:
                      brokers: <broker-hostnames-1>
                      configuration:
                        ssl:
                          truststore:
                            location: <location-1>
                            password: <ts-password-1>
                            type: JKS
                      jaas:
                        loginModule: org.apache.kafka.common.security.scram.ScramLoginModule
                        options:
                          username: <username-1>
                          password: <password-1>
        kafka-2-with-ssl:
          type: kafka
          defaultCandidate: false
          environment:
            spring:
              cloud:
                stream:
                  kafka:
                    binder:
                      brokers: <broker-hostnames-2>
                      configuration:
                        ssl:
                          truststore:
                            location: <location-2>
                            password: <ts-password-2>
                            type: JKS
                      jaas:
                        loginModule: org.apache.kafka.common.security.scram.ScramLoginModule
                        options:
                          username: <username-2>
                          password: <password-2>
      kafka:
        binder:
          configuration:
            security:
              protocol: SASL_SSL
            sasl:
              mechanism: SCRAM-SHA-256
The above configuration is in line with the sample config available in spring-cloud-stream's official Git repo. A similar issue raised on the library's Git repo says it's fixed in the latest versions, but that doesn't seem to be the case (springBootVersion: 2.2.8, spring-cloud-stream-dependencies version: Horsham.SR6). I am getting the following error:
Failed to create consumer binding; retrying in 30 seconds | org.springframework.cloud.stream.binder.BinderException: Exception thrown while starting consumer:
at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindConsumer(AbstractMessageChannelBinder.java:461)
at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindConsumer(AbstractMessageChannelBinder.java:90)
at org.springframework.cloud.stream.binder.AbstractBinder.bindConsumer(AbstractBinder.java:143)
at org.springframework.cloud.stream.binding.BindingService.lambda$rescheduleConsumerBinding$1(BindingService.java:201)
at org.springframework.cloud.sleuth.instrument.async.TraceRunnable.run(TraceRunnable.java:68)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.common.KafkaException: Failed to create new KafkaAdminClient
at org.apache.kafka.clients.admin.KafkaAdminClient.createInternal(KafkaAdminClient.java:407)
at org.apache.kafka.clients.admin.AdminClient.create(AdminClient.java:65)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.createAdminClient(KafkaTopicProvisioner.java:246)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.doProvisionConsumerDestination(KafkaTopicProvisioner.java:216)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.provisionConsumerDestination(KafkaTopicProvisioner.java:183)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.provisionConsumerDestination(KafkaTopicProvisioner.java:79)
at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindConsumer(AbstractMessageChannelBinder.java:402)
... 12 common frames omitted
Caused by: org.apache.kafka.common.KafkaException: javax.security.auth.login.LoginException: KrbException: Cannot locate default realm
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:160)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:146)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:67)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:99)
at org.apache.kafka.clients.admin.KafkaAdminClient.createInternal(KafkaAdminClient.java:382)
... 18 common frames omitted
Caused by: javax.security.auth.login.LoginException: KrbException: Cannot locate default realm
at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:804)
at com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:617)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at javax.security.auth.login.LoginContext.invoke(LoginContext.java:755)
at javax.security.auth.login.LoginContext.access$000(LoginContext.java:195)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
at javax.security.auth.login.LoginContext.login(LoginContext.java:587)
at org.apache.kafka.common.security.authenticator.AbstractLogin.login(AbstractLogin.java:60)
at org.apache.kafka.common.security.authenticator.LoginManager.<init>(LoginManager.java:61)
at org.apache.kafka.common.security.authenticator.LoginManager.acquireLoginManager(LoginManager.java:111)
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:149)
... 22 common frames omitted
Caused by: sun.security.krb5.RealmException: KrbException: Cannot locate default realm
at sun.security.krb5.Realm.getDefault(Realm.java:68)
at sun.security.krb5.PrincipalName.<init>(PrincipalName.java:462)
at sun.security.krb5.PrincipalName.<init>(PrincipalName.java:471)
at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:706)
... 38 common frames omitted
Caused by: sun.security.krb5.KrbException: Cannot locate default realm
at sun.security.krb5.Config.getDefaultRealm(Config.java:1029)
at sun.security.krb5.Realm.getDefault(Realm.java:64)
... 41 common frames omitted
This makes me think that the library is not picking up the config props properly: jaas.loginModule is specified as ScramLoginModule, yet it uses Krb5LoginModule to authenticate.
Strikingly, when the configuration is done as follows (the difference lies in the last part, with the SSL credentials outside the binder's environment), the application connects to the binder specified in the global SSL props (outside the binder's env) and silently ignores the other binder without showing any error logs.
Say the credentials of the binder kafka-2-with-ssl were specified in the global SSL props: that binder is created, and the bindings subscribed to it start consuming events. But this is useful only when we need to create a single binder.
spring:
  cloud:
    stream:
      bindings:
        binding-1:
          binder: kafka-1-with-ssl
          destination: <destination-1>
          content-type: text/plain
          group: <group-id-1>
          consumer:
            header-mode: headers
        binding-2:
          binder: kafka-2-with-ssl
          destination: <destination-2>
          content-type: text/plain
          group: <group-id-2>
          consumer:
            header-mode: headers
      binders:
        kafka-1-with-ssl:
          type: kafka
          defaultCandidate: false
          environment:
            spring:
              cloud:
                stream:
                  kafka:
                    binder:
                      brokers: <broker-hostnames-1>
                      configuration:
                        ssl:
                          truststore:
                            location: <location-1>
                            password: <ts-password-1>
                            type: JKS
                      jaas:
                        loginModule: org.apache.kafka.common.security.scram.ScramLoginModule
                        options:
                          username: <username-1>
                          password: <password-1>
        kafka-2-with-ssl:
          type: kafka
          defaultCandidate: false
          environment:
            spring:
              cloud:
                stream:
                  kafka:
                    binder:
                      brokers: <broker-hostnames-2>
                      configuration:
                        ssl:
                          truststore:
                            location: <location-2>
                            password: <ts-password-2>
                            type: JKS
                      jaas:
                        loginModule: org.apache.kafka.common.security.scram.ScramLoginModule
                        options:
                          username: <username-2>
                          password: <password-2>
      kafka:
        binder:
          configuration:
            security:
              protocol: SASL_SSL
            sasl:
              mechanism: SCRAM-SHA-256
            ssl:
              truststore:
                location: <location-2>
                password: <ts-password-2>
                type: JKS
          jaas:
            loginModule: org.apache.kafka.common.security.scram.ScramLoginModule
            options:
              username: <username-2>
              password: <password-2>
I assure you that nothing is wrong with the SSL credentials; I tested diligently, with either of the SSL Kafka binders successfully getting created individually. The aim is to connect to multiple Kafka binders with the SASL_SSL protocol. Thanks in advance.
I think you may want to follow the solution implemented in KIP-85 for this issue. Instead of using the JAAS configuration provided by the Spring Cloud Stream Kafka binder or setting the java.security.auth.login.config property, use the sasl.jaas.config property, which takes precedence over the other methods. By using sasl.jaas.config, you can override the restriction placed by the JVM in which a JVM-wide static security context is used, which ignores any subsequent JAAS configurations found after the first one.
Here is a sample application that demonstrates how to connect to multiple Kafka clusters with different security contexts as a multi-binder application.
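A sketch of what that can look like per binder, reusing the question's own placeholders (the exact nesting of sasl.jaas.config under the binder's configuration block is my assumption; check it against the linked sample application):
spring:
  cloud:
    stream:
      binders:
        kafka-1-with-ssl:
          type: kafka
          defaultCandidate: false
          environment:
            spring:
              cloud:
                stream:
                  kafka:
                    binder:
                      brokers: <broker-hostnames-1>
                      configuration:
                        security:
                          protocol: SASL_SSL
                        sasl:
                          mechanism: SCRAM-SHA-256
                          jaas:
                            config: org.apache.kafka.common.security.scram.ScramLoginModule required username="<username-1>" password="<password-1>";
This way each binder carries its own complete JAAS configuration, and no JVM-wide static security context is involved.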

virtualHost Not Creating on RabbitMQ server using application.yml springboot and from config server

Virtual hosts are not getting created on the RabbitMQ server based on the configuration.
Do I have to create the virtual hosts (VHs) on RabbitMQ myself?
Am I missing some configuration?
Please find the configuration below.
application.yml
spring:
  rabbitmq:
    host: 127.0.0.1
    virtual-host: /defaultVH
    username: defaultUser
    password: defaultPassword
  cloud:
    stream:
      bindings:
        saviyntSampleQueueA:
          binder: rabbit-A
          contentType: application/x-java-object
          group: groupA
          destination: saviyntSampleQueueA
        saviyntSampleQueueB:
          binder: rabbit-B
          contentType: application/x-java-object
          group: groupB
          destination: saviyntSampleQueueB
      binders:
        rabbit-A:
          defaultCandidate: false
          inheritEnvironment: false
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: 127.0.0.1
                virtualHost: /vhA
                username: userA
                password: paswdA
                port: 5672
                connection-timeout: 10000
        rabbit-B:
          defaultCandidate: false
          inheritEnvironment: false
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: 127.0.0.1
                virtualHost: /vhB
                username: userB
                password: paswdB
                port: 5672
                connection-timeout: 10000
bootstrap.yml
############################################
# default settings
############################################
spring:
  main:
    banner-mode: "off"
  application:
    name: demo-service
  cloud:
    config:
      enabled: true # change this to use config-service
      retry:
        maxAttempts: 3
      discovery:
        enabled: false
      fail-fast: true
      override-system-properties: false
server:
  port: 8080
Here is the default Spring Boot application class with the binding enabled:
@EnableBinding({MessageChannels.class})
@SpringBootApplication
public class Configissue1124Application {

    public static void main(String[] args) {
        SpringApplication.run(Configissue1124Application.class, args);
    }
}
And a simple, straightforward message channel interface to dispatch messages:
interface MessageChannels {

    @Input("saviyntSampleQueueA")
    SubscribableChannel queueA();

    @Input("saviyntSampleQueueB")
    SubscribableChannel queueB();
}
When I run the Boot application, it does not create any virtual host on the system. I also tried providing the same configuration through the config server, but still no luck.
Can you please help me find what I am missing?
Thanks in advance!
The AMQP protocol (or RabbitMQ REST API) provides no mechanism to provision virtual hosts from the client.
Virtual hosts must be provisioned manually on the server.
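For example, with the rabbitmqctl CLI (a sketch using the user, password, and vhost names from the question's configuration):
# create the virtual hosts referenced in the binder environments
rabbitmqctl add_vhost /vhA
rabbitmqctl add_vhost /vhB
# create the users and grant them full permissions on their vhosts
rabbitmqctl add_user userA paswdA
rabbitmqctl add_user userB paswdB
rabbitmqctl set_permissions -p /vhA userA ".*" ".*" ".*"
rabbitmqctl set_permissions -p /vhB userB ".*" ".*" ".*"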

Spring Cloud Consul health check and status configuration on GCP

We are trying to register a spring-cloud-consul application with Consul on a GCP Compute Engine instance. It registers the application with Consul, but there are two problems we are facing with the application. Below are the application.yaml and bootstrap.yaml for the application.
application.yaml
server:
  port: 10003
spring:
  application:
    name: hello-service
  cloud:
    consul:
      enabled: true
    inetutils:
      useOnlySiteLocalInterfaces: true
endpoints:
  actuator:
    sensitive: false
bootstrap.yaml
spring:
  cloud:
    consul:
      enabled: true
      host: 10.160.0.18
      port: 8500
      discovery:
        prefer-ip-address: true
First, Consul is not able to call the health check on Compute Engine, possibly because the service is registered under the internal domain name of the instance:
service with consul: NewService{id='hello-service-10003',
name='hello-service', tags=[secure=false],
address='consul-app-test.c.name-of-project.internal', meta=null,
port=10003, enableTagOverride=null, check=Check{script='null',
interval='10s', ttl='null',
http='http://consul-app-test.c.name-of-project.internal:10003/actuator/health',
method='null', header={}, tcp='null', timeout='null',
deregisterCriticalServiceAfter='null', tlsSkipVerify=null,
status='null'}, checks=null}
Second, the application is not de-registering from Consul. We have stopped the application, yet it still shows up in the Consul UI.
I made a few changes in the application.yaml and bootstrap.yaml which worked for me.
application.yaml
spring:
  application:
    name: hello-service
  cloud:
    consul:
      discovery:
        instanceId: ${spring.application.name}:${random.value}
        health-check-critical-timeout: 3m
        prefer-ip-address: true # disable if we want to use the Google Cloud internal DNS
bootstrap.yaml
spring:
  cloud:
    consul:
      enabled: true
      host: 10.160.0.18
      port: 8500
If you use version 2.1.2 like me (org.springframework.cloud:spring-cloud-starter-consul-discovery:2.1.2.RELEASE), you can set:
spring:
  cloud:
    consul:
      host: localhost # Consul address
      port: 8500 # Consul port
      discovery:
        prefer-ip-address: true # this must be set
        tags: version=1.0
        instance-id: ${spring.application.name}:${spring.cloud.client.ip-address}
        healthCheckPath: /actuator/health # path used for health checks
        healthCheckInterval: 15s # health check interval
        healthCheckTimeout: 60s # health check timeout
        healthCheckCriticalTimeout: 5m # deregister the service 5 minutes after health checks start failing
You can also look at the source code of ConsulDiscoveryProperties:
@ConfigurationProperties("spring.cloud.consul.discovery")
public class ConsulDiscoveryProperties {

    ……

    /** Is service discovery enabled? */
    private boolean enabled = true;

    /** Alternate server path to invoke for health checking. */
    private String healthCheckPath = "/actuator/health";

    /** Custom health check url to override default. */
    private String healthCheckUrl;

    /** How often to perform the health check (e.g. 10s), defaults to 10s. */
    private String healthCheckInterval = "10s";

    /** Timeout for health check (e.g. 10s). */
    private String healthCheckTimeout;

    /**
     * Timeout to deregister services critical for longer than timeout (e.g. 30m).
     * Requires consul version 7.x or higher.
     */
    private String healthCheckCriticalTimeout;

    ……
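As a quick way to verify what is actually registered, and whether the critical timeout eventually removes the stopped instance, you can query the Consul agent's HTTP API directly (a sketch, using the agent address from the question's bootstrap.yaml):
# list services currently registered with this agent
curl http://10.160.0.18:8500/v1/agent/services
# list health checks and their current status
curl http://10.160.0.18:8500/v1/agent/checks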

spring config server cannot refresh by webhook

I use Spring Cloud Bus and RabbitMQ to refresh the Spring config. When we update the config and push it to GitHub, the webhook cannot refresh. The response message is:
"{"timestamp":"2018-05-14T09:44:48.230+0000","status":400,"error":"Bad
Request","message":"JSON parse error: Cannot deserialize instance of
java.lang.String out of START_ARRAY token; nested exception is
com.fasterxml.jackson.databind.exc.MismatchedInputException: Cannot
deserialize instance of java.lang.String out of START_ARRAY token\n
at [Source: (PushbackInputStream); line: 1, column: 290] (through
reference chain:
java.util.LinkedHashMap[\"commits\"])","path":"/actuator/bus-refresh"}
";
But when we refresh through Postman or by using the command "curl -X POST http://436d3d0b.ngrok.io/actuator/bus-refresh", it refreshes normally.
The application.yml is shown below:
spring:
  application:
    name: config-server
  cloud:
    config:
      server:
        git:
          search-paths: config/*
          username:
          password:
          uri: "github url"
          label: master
    bus:
      trace:
        enabled: true
  rabbitmq:
    host: localhost
    port: 5672
    username: guest
    password: guest
management:
  endpoints:
    web:
      exposure:
        include: bus-refresh
The GitHub webhook payload URL is "http://436d3d0b.ngrok.io/actuator/bus-refresh" and the content type is "application/json".
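Not stated in the post, but a hedged pointer: GitHub's push payload sends "commits" as a JSON array, which /actuator/bus-refresh cannot deserialize (hence the START_ARRAY error). A common approach is to add the spring-cloud-config-monitor dependency to the config server and point the webhook at the /monitor endpoint it exposes, which understands GitHub push payloads and publishes the refresh event over the bus. A sketch (version managed by the Spring Cloud BOM):
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-config-monitor</artifactId>
</dependency>
The webhook payload URL then becomes http://436d3d0b.ngrok.io/monitor instead of /actuator/bus-refresh.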
