Is it possible to create a multi-binder binding with Spring Cloud Stream Kafka Streams to stream from cluster A and produce to cluster B?

I want to create a Kafka Streams application with Spring Cloud Stream that integrates two different Kafka clusters / setups. I tried to implement it using a multi-binder configuration as described in the documentation, similar to the examples here:
https://github.com/spring-cloud/spring-cloud-stream-samples/tree/main/multi-binder-samples
Given a simple function like this:
@Bean
public Function<KStream<String, AnalyticsEvent>, KStream<String, UpdateEvent>> analyticsEventProcessor() {
    return input -> input
            .filter(new AnalyticsPredicate())
            .map(new AnalyticsToUpdateEventMapper());
}
In the configuration I'm trying to bind these to two different binders.
spring.cloud:
  stream:
    bindings:
      analyticsEventProcessor-in-0:
        destination: analytics-events
        binder: cluster1-kstream
      analyticsEventProcessor-out-0:
        destination: update-events
        binder: cluster2-kstream
    binders:
      cluster1-kstream:
        type: kstream
        environment:
          spring:
            cloud:
              stream:
                kafka:
                  binder:
                    brokers: <url cluster1>:9093
                    configuration:
                      security.protocol: SSL
                      schema.registry.url: <schema-registry-url-cluster1>
                      schema.registry.ssl.truststore.location: /mnt/secrets/cluster1/truststore.jks
                      schema.registry.ssl.truststore.password: ${SPRING_KAFKA_SSL_CLUSTER1_TRUST-STORE-PASSWORD}
                      schema.registry.ssl.keystore.location: /mnt/secrets/cluster1/keystore.jks
                      schema.registry.ssl.keystore.password: ${SPRING_KAFKA_SSL_CLUSTER1_KEY-STORE-PASSWORD}
                      ssl.truststore.location: /mnt/secrets/cluster1/truststore.jks
                      ssl.truststore.password: ${SPRING_KAFKA_SSL_CLUSTER1_TRUST-STORE-PASSWORD}
                      ssl.truststore.type: JKS
                      ssl.keystore.location: /mnt/secrets/cluster1/keystore.jks
                      ssl.keystore.password: ${SPRING_KAFKA_SSL_CLUSTER1_KEY-STORE-PASSWORD}
                      ssl.keystore.type: JKS
                      ssl.enabled.protocols: TLSv1.2
                  streams:
                    binder:
                      brokers: <url cluster1>:9093
                      configuration:
                        security.protocol: SSL
                        schema.registry.url: <schema-registry-url-cluster1>
                        schema.registry.ssl.truststore.location: /mnt/secrets/cluster1/truststore.jks
                        schema.registry.ssl.truststore.password: ${SPRING_KAFKA_SSL_CLUSTER1_TRUST-STORE-PASSWORD}
                        schema.registry.ssl.keystore.location: /mnt/secrets/cluster1/keystore.jks
                        schema.registry.ssl.keystore.password: ${SPRING_KAFKA_SSL_CLUSTER1_KEY-STORE-PASSWORD}
                        ssl.truststore.location: /mnt/secrets/cluster1/truststore.jks
                        ssl.truststore.password: ${SPRING_KAFKA_SSL_CLUSTER1_TRUST-STORE-PASSWORD}
                        ssl.truststore.type: JKS
                        ssl.keystore.location: /mnt/secrets/cluster1/keystore.jks
                        ssl.keystore.password: ${SPRING_KAFKA_SSL_CLUSTER1_KEY-STORE-PASSWORD}
                        ssl.keystore.type: JKS
                        ssl.enabled.protocols: TLSv1.2
      cluster2-kstream:
        type: kstream
        environment:
          spring:
            cloud:
              stream:
                kafka:
                  binder:
                    brokers: <url cluster2>:9093
                    configuration:
                      security.protocol: SSL
                      schema.registry.url: <schema-registry-url-cluster2>
                      schema.registry.ssl.truststore.location: /mnt/secrets/cluster2/truststore.jks
                      schema.registry.ssl.truststore.password: ${SPRING_KAFKA_SSL_CLUSTER2_TRUST-STORE-PASSWORD}
                      schema.registry.ssl.keystore.location: /mnt/secrets/cluster2/keystore.jks
                      schema.registry.ssl.keystore.password: ${SPRING_KAFKA_SSL_CLUSTER2_KEY-STORE-PASSWORD}
                      ssl.truststore.location: /mnt/secrets/cluster2/truststore.jks
                      ssl.truststore.password: ${SPRING_KAFKA_SSL_CLUSTER2_TRUST-STORE-PASSWORD}
                      ssl.truststore.type: JKS
                      ssl.keystore.location: /mnt/secrets/cluster2/keystore.jks
                      ssl.keystore.password: ${SPRING_KAFKA_SSL_CLUSTER2_KEY-STORE-PASSWORD}
                      ssl.keystore.type: JKS
                      ssl.enabled.protocols: TLSv1.2
                  streams:
                    binder:
                      brokers: <url cluster2>:9093
                      configuration:
                        security.protocol: SSL
                        schema.registry.url: <schema-registry-url-cluster2>
                        schema.registry.ssl.truststore.location: /mnt/secrets/cluster2/truststore.jks
                        schema.registry.ssl.truststore.password: ${SPRING_KAFKA_SSL_CLUSTER2_TRUST-STORE-PASSWORD}
                        schema.registry.ssl.keystore.location: /mnt/secrets/cluster2/keystore.jks
                        schema.registry.ssl.keystore.password: ${SPRING_KAFKA_SSL_CLUSTER2_KEY-STORE-PASSWORD}
                        ssl.truststore.location: /mnt/secrets/cluster2/truststore.jks
                        ssl.truststore.password: ${SPRING_KAFKA_SSL_CLUSTER2_TRUST-STORE-PASSWORD}
                        ssl.truststore.type: JKS
                        ssl.keystore.location: /mnt/secrets/cluster2/keystore.jks
                        ssl.keystore.password: ${SPRING_KAFKA_SSL_CLUSTER2_KEY-STORE-PASSWORD}
                        ssl.keystore.type: JKS
                        ssl.enabled.protocols: TLSv1.2
I first tried running the application entirely against a single cluster, which worked well. When I run it with the two clusters I always get an error:
2022-08-10 15:28:42.892 WARN 1 --- [-StreamThread-2] org.apache.kafka.clients.NetworkClient : [Consumer clientId=<clientid>-StreamThread-2-consumer, groupId=<group-id>] Error while fetching metadata with correlation id 2 : {analytics-events=TOPIC_AUTHORIZATION_FAILED}
2022-08-10 15:28:42.893 ERROR 1 --- [-StreamThread-2] org.apache.kafka.clients.Metadata : [Consumer clientId=<client-id>, groupId=<group-id>] Topic authorization failed for topics [analytics-events]
2022-08-10 15:28:42.893 INFO 1 --- [-StreamThread-2] org.apache.kafka.clients.Metadata : [Consumer clientId=<client-id>, groupId=<group-id>] Cluster ID: <cluster-id>
2022-08-10 15:28:42.893 ERROR 1 --- [-StreamThread-2] c.s.a.a.e.UncaughtExceptionHandler : org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [analytics-events]
2022-08-10 15:28:42.893 ERROR 1 --- [-StreamThread-2] org.apache.kafka.streams.KafkaStreams : stream-client [<client-id>] Replacing thread in the streams uncaught exception handler
org.apache.kafka.streams.errors.StreamsException: org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [analytics-events]
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:642) ~[kafka-streams-3.1.1.jar!/:na]
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:576) ~[kafka-streams-3.1.1.jar!/:na]
Caused by: org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [analytics-events]
I verified the Kafka client certificates; they should be correct (I inspected them with keytool), and the password environment variables are set correctly. The ConsumerConfig also uses the correct broker URL.
Is it possible within a KStream function to use different Kafka clusters via multi-binder, one for the input of the stream and one for the output, or does this only work with binders of type kafka?

In Kafka Streams, you cannot connect to two different clusters in a single application. This means that you cannot receive from a cluster on the inbound and write to another cluster on the outbound when using a Spring Cloud Stream function. See this SO [thread][1] for more details.
As a workaround, you can receive from and write to the same cluster in your Kafka Streams function, and then bridge the output topic to the second cluster using a regular Kafka binder-based function. Regular (non-Kafka Streams) functions can consume from and publish to multiple clusters.
@Bean
public Function<KStream<String, AnalyticsEvent>, KStream<String, UpdateEvent>> analyticsEventProcessor() {
    return input -> input
            .filter(new AnalyticsPredicate())
            .map(new AnalyticsToUpdateEventMapper());
}
This function needs to receive and write to the same cluster. Then you can have another function as below.
@Bean
public Function<?, ?> bridgeFunction() {
    ....
}
For this function, input is cluster-1 and output is cluster-2.
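As a rough sketch of what that bridge could look like (the pass-through body and the UpdateEvent type are assumptions based on the question, not code from a working sample):

@Bean
public Function<UpdateEvent, UpdateEvent> bridgeFunction() {
    // Hypothetical pass-through: consume each UpdateEvent from the intermediate topic
    // on cluster 1 and republish it unchanged so the output binding on cluster 2 writes it out.
    return updateEvent -> updateEvent;
}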
When using this workaround, make sure to also include the regular Kafka binder as a dependency: spring-cloud-stream-binder-kafka.
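The bridge's bindings could then point at two regular kafka-type binders, roughly along these lines (the binder names and broker placeholders are illustrative, mirroring the question's configuration):

spring.cloud:
  stream:
    function:
      definition: analyticsEventProcessor;bridgeFunction
    bindings:
      bridgeFunction-in-0:
        destination: update-events        # intermediate topic on cluster 1
        binder: cluster1-kafka
      bridgeFunction-out-0:
        destination: update-events        # final topic on cluster 2
        binder: cluster2-kafka
    binders:
      cluster1-kafka:
        type: kafka
        environment:
          spring.cloud.stream.kafka.binder.brokers: <url cluster1>:9093
      cluster2-kafka:
        type: kafka
        environment:
          spring.cloud.stream.kafka.binder.brokers: <url cluster2>:9093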
Keep in mind that this approach has disadvantages, such as the overhead of an extra topic and the added latency. However, it is a potential workaround for this use case. For more options, see the SO thread I mentioned above.
[1]: https://stackoverflow.com/questions/45847690/how-to-connect-to-multiple-clusters-in-a-single-kafka-streams-application

Related

Kafka starts throwing replication factor error as soon as spring boot app connects to it

Kafka starts flooding the logs with the error
INFO [Admin Manager on Broker 2]: Error processing create topic request CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=3, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')]) (kafka.server.ZkAdminManager)
org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 3 larger than available brokers: 1.
as soon as the Spring Boot application connects. Here's the Kafka config I pass via the application YAML:
spring:
  cloud:
    stream:
      instanceIndex: 0
      kafka:
        default:
          producer:
            topic:
              replication-factor: 1
            configuration:
              key:
                serializer: org.apache.kafka.common.serialization.StringSerializer
              spring:
                json:
                  add:
                    type:
                      headers: 'false'
              value:
                serializer: org.springframework.kafka.support.serializer.JsonSerializer
              max:
                block:
                  ms: '100'
          consumer:
            topic:
              replication-factor: 1
            configuration:
              key:
                deserializer: org.apache.kafka.common.serialization.StringDeserializer
              spring:
                json:
                  trusted:
                    packages: com.message
              value:
                deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
        binder:
          autoCreateTopics: 'false'
          brokers: localhost:19092
          replicationFactor: 1
      bindings:
        consume-in-0:
          group: ${category}-${spring.application.name}-consumer-${runtime-env}
          destination: alm-tom-${runtime-env}
        publish-out-0:
          destination: ${category}-${spring.application.name}-${runtime-env}
I don't see any other config to control the replication factor of the __consumer_offsets topic.
The error is coming from the broker itself. It will only start logging this when you create your first consumer group...
Check the server.properties file for offsets.topic.replication.factor.
Similarly, the transactions topic has its own replication factor that needs to be reduced (assuming you're using a newer Kafka version).
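For a single-broker development setup, that typically means lowering these settings in server.properties to 1 (these are standard Kafka broker properties; the values shown are just an example for one broker):

# server.properties on the single broker
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1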

spring cloud stream kafka: preventing a specific exception from retry attempts and from being added to the DLQ

I have a Spring Cloud Stream Kafka sample with this configuration:
spring:
  main:
    allow-circular-references: true
  cloud:
    stream:
      bindings:
        news-in-0:
          consumer:
            max-attempts: 5
            concurrency: 25
            back-off-multiplier: 4.0
            back-off-max-interval: 12000
            retryableExceptions:
              com.mycompany.kafka.DuplicateReverseException: false
              com.mycompany.kafka.ApplicationException: true
          content-type: application/json
          group: sample
          destination: newsQueue
        newsdlq-in-0:
          consumer:
            max-attempts: 2
            concurrency: 25
            back-off-initial-interval: 15000 #360000
          content-type: application/json
          group: sample
          destination: newsQueueDlq
      kafka:
        binder:
          auto-create-topics: true
          brokers: localhost:9092
          min-partition-count: 25
        bindings:
          news-in-0:
            consumer:
              ackEachRecord: true
              enableDlq: true
              dlqName: newsQueueDlq
              autoCommitOnError: true
      function:
        definition: news;newsdlq
  kafka:
    consumer:
      auto-offset-reset: earliest
    listener:
      poll-timeout: 2000
      idle-event-interval: 30000
  application:
    name: consumer-cloud-stream
I'm using spring-cloud.version -> 2021.0.3.
In our business, we need to ignore business exceptions like DuplicateReverseException, meaning no further retry attempts and no longer putting them on the DLQ.
This configuration covers the first requirement, but I can't prevent the message from being added to the DLQ.
Is it OK to just catch this exception and ignore it? Does Spring provide a better way to do this?
@Bean
public Consumer<Message<News>> news() {
    return message -> {
        if (message.getPayload().getSource().equals("1")) {
            log.error("1");
            throw new DuplicateReverseException();
        } else {
            log.error("2");
            throw new ApplicationException();
        }
    };
}
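For illustration only, the catch-and-ignore approach mentioned in the question could look roughly like this within the same class (handleNews is a hypothetical stand-in for the real business logic):

@Bean
public Consumer<Message<News>> news() {
    return message -> {
        try {
            handleNews(message.getPayload());   // hypothetical business logic
        } catch (DuplicateReverseException e) {
            // Swallow the business exception so it is neither retried nor sent to the DLQ;
            // any other exception still propagates to the binder's retry/DLQ handling.
            log.warn("Ignoring duplicate reverse event: {}", message.getPayload());
        }
    };
}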

How to deal with Docker-deployed Spring Cloud applications accessing each other through Zuul?

I deployed my Spring Cloud application (Eureka server, Zuul, Eureka client) in Docker. I want to access the Eureka client via Zuul.
Zuul and the Eureka client are registered with the Eureka server. When I access each application directly, it works. But when I access the Eureka client via Zuul, the Zuul console shows java.net.NoRouteToHostException. I don't know why, or how to deal with this problem.
Eureka server config is like this.
server:
  port: 1020
spring:
  application:
    name: eureka-server
security:
  basic:
    enabled: true
  user:
    name: admin
    password: admin
eureka:
  client:
    fetch-registry: true
    register-with-eureka: true
    serviceUrl:
      defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka/
  instance:
    hostname: 192.168.90.183
    prefer-ip-address: true
    ip-address: 192.168.90.183
  server:
    enable-self-preservation: false
    eviction-interval-timer-in-ms: 5000
management:
  endpoints:
    web:
      exposure:
        include: "*"
  endpoint:
    shutdown:
      enabled: true
Zuul config is like this.
server:
  port: 8088
spring:
  application:
    name: gateway
security:
  oauth2:
management:
  security:
    enabled: false
  endpoints:
    web:
      exposure:
        exclude: refresh,health,info
ribbon:
  ReadTimeout: 20000
  SocketTimeout: 20000
zuul:
  # sensitiveHeaders: "*"
  routes:
    tdcm-linyi:
      path: /371300/**
      serviceId: tdcm
  ratelimit:
    key-prefix: your-prefix
    enabled: true
    behind-proxy: true
    default-policy:
      limit: 100
      quota: 1000
      refresh-interval: 60
      type:
        - user
        - origin
        - url
  host:
    connect-timeout-millis: 20000
    socket-timeout-millis: 20000
#================================eureka setting==============================
eureka:
  instance:
    instance-id: ${eureka.instance.hostname}:${server.port}
    hostname: 192.168.90.183
    prefer-ip-address: true
    ip-address: 192.168.90.183
    lease-expiration-duration-in-seconds: 10
    lease-renewal-interval-in-seconds: 5
  client:
    serviceUrl:
      defaultZone: http://admin:admin@${EUREKA_HOST:192.168.90.183}:${EUREKA_PORT:1020}/eureka
    fetch-registry: true
    register-with-eureka: true
Eureka client config is like this.
spring:
  application:
    name: tdcm
  banner:
    charset: UTF-8
  http:
    encoding:
      charset: UTF-8
      enabled: true
      force: true
  messages:
    encoding: UTF-8
  mvc:
    throw-exception-if-no-handler-found: true
# Server
server:
  port: 8926
  tomcat:
    uri-encoding: UTF-8
#================================eureka setting==============================
eureka:
  instance:
    instance-id: ${eureka.instance.hostname}:${server.port}
    hostname: 192.168.90.183
    prefer-ip-address: true
    ip-address: 192.168.90.183
    lease-expiration-duration-in-seconds: 10
    lease-renewal-interval-in-seconds: 5
  client:
    serviceUrl:
      defaultZone: http://admin:admin@${EUREKA_HOST:192.168.90.183}:${EUREKA_PORT:1020}/eureka
    fetch-registry: true
    register-with-eureka: true
My test procedure is like this.
Accessing Zuul at http://192.168.90.183:8088 works well.
Accessing the Eureka client at http://192.168.90.183:8926/getCityCenter works well.
But when I access the Eureka client via Zuul at
http://192.168.90.183:8088/371300/getCityCenter , it doesn't work.
The console shows the following information.
03-29 01:55:27.229 INFO [c.n.loadbalancer.DynamicServerListLoadBalancer] - DynamicServerListLoadBalancer for client tdcm initialized: DynamicServerListLoadBalancer:{NFLoadBalancer:name=tdcm,current list of Servers=[192.168.90.183:8926],Load balancer stats=Zone stats: {defaultzone=[Zone:defaultzone; Instance count:1; Active connections count: 0; Circuit breaker tripped count: 0; Active connections per server: 0.0;]
},Server stats: [[Server:192.168.90.183:8926; Zone:defaultZone; Total Requests:0; Successive connection failure:0; Total blackout seconds:0; Last connection made:Thu Jan 01 00:00:00 UTC 1970; First connection made: Thu Jan 01 00:00:00 UTC 1970; Active Connections:0; total failure count in last (1000) msecs:0; average resp time:0.0; 90 percentile resp time:0.0; 95 percentile resp time:0.0; min resp time:0.0; max resp time:0.0; stddev resp time:0.0]
]}ServerList:org.springframework.cloud.netflix.ribbon.eureka.DomainExtractingServerList#3275110f
03-29 01:55:28.201 INFO [com.netflix.config.ChainedDynamicProperty] - Flipping property: tdcm.ribbon.ActiveConnectionsLimit to use NEXT property: niws.loadbalancer.availabilityFilteringRule.activeConnectionsLimit = 2147483647
03-29 01:55:28.545 INFO [org.apache.http.impl.execchain.RetryExec] - I/O exception (java.net.NoRouteToHostException) caught when processing request to {}->http://192.168.90.183:8926: No route to host (Host unreachable)
03-29 01:55:28.546 INFO [org.apache.http.impl.execchain.RetryExec] - I/O exception (java.net.NoRouteToHostException) caught when processing request to {}->http://192.168.90.183:8926: No route to host (Host unreachable)
03-29 01:55:28.546 INFO [org.apache.http.impl.execchain.RetryExec] - Retrying request to {}->http://192.168.90.183:8926
03-29 01:55:28.546 INFO [org.apache.http.impl.execchain.RetryExec] - Retrying request to {}->http://192.168.90.183:8926
03-29 01:55:28.547 INFO [org.apache.http.impl.execchain.RetryExec] - I/O exception (java.net.NoRouteToHostException) caught when processing request to {}->http://192.168.90.183:8926: No route to host (Host unreachable)
03-29 01:55:28.548 INFO [org.apache.http.impl.execchain.RetryExec] - Retrying request to {}->http://192.168.90.183:8926
03-29 01:55:28.555 ERROR [c.t.gateway.component.exception.ProducerFallback] - s:tdcm
03-29 01:55:28.556 ERROR [c.t.gateway.component.exception.ProducerFallback] - exception: null
03-29 01:55:29.549 ERROR [c.t.gateway.component.exception.ProducerFallback] - s:tdcm
03-29 01:55:29.550 ERROR [c.t.gateway.component.exception.ProducerFallback] - exception: null
03-29 01:55:29.550 ERROR [c.t.gateway.component.exception.ProducerFallback] - s:tdcm
03-29 01:55:29.551 ERROR [c.t.gateway.component.exception.ProducerFallback] - exception: null
03-29 01:55:29.549 ERROR [c.t.gateway.component.exception.ProducerFallback] - s:tdcm
03-29 01:55:29.552 ERROR [c.t.gateway.component.exception.ProducerFallback] - exception: null
03-29 01:55:37.508 ERROR [c.t.gateway.component.exception.ProducerFallback] - s:tdcm
03-29 01:55:37.510 ERROR [c.t.gateway.component.exception.ProducerFallback] - exception: null
03-29 01:55:39.031 ERROR [c.t.gateway.component.exception.ProducerFallback] - s:tdcm
03-29 01:55:39.033 ERROR [c.t.gateway.component.exception.ProducerFallback] - exception: null
It seems Zuul can't find the route to the tdcm Eureka client.
I tried deploying all the applications (Eureka server, Zuul, Eureka client) directly on the computer, not in Docker, with the same config as described above, and it works well. I don't know why it doesn't work when accessing the Eureka client via Zuul in the Docker deployment.
I use the host computer's IP address for the Spring Cloud applications.
My Docker version is 17.12.1-ce.
My Spring Cloud version is Finchley.SR1.
My Spring Boot version is 2.0.3.RELEASE.
My host computer runs CentOS 7.
How can I deal with this problem?
I found out how to deal with the problem: delete the ip-address value from the Eureka client YAML config.
eureka:
  instance:
    ip-address: 192.168.90.183
The reason is that the Eureka client runs inside Docker's internal network; once the external ip-address is no longer advertised, Zuul can reach it through Docker's internal network.
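The instance section of the Eureka client then ends up roughly like this, with the ip-address entry removed (a sketch of the resulting config, not verified against the original project):

eureka:
  instance:
    instance-id: ${eureka.instance.hostname}:${server.port}
    hostname: 192.168.90.183
    # ip-address removed so the client is reachable via the Docker-internal address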

Spring Cloud Gateway+Consul configurations

We are using Spring Cloud Gateway in front of multiple microservices, with Consul as service discovery. The microservices are developed in different languages.
Please find the build.gradle for the application:
buildscript {
    ext {
        springBootVersion = '2.1.2.RELEASE'
    }
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath("org.springframework.boot:spring-boot-gradle-plugin:${springBootVersion}")
    }
}

apply plugin: 'java'
apply plugin: 'org.springframework.boot'
apply plugin: 'io.spring.dependency-management'

group = 'com.demo'
version = '0.0.1-SNAPSHOT'
sourceCompatibility = '1.8'

repositories {
    mavenCentral()
    maven { url 'https://repo.spring.io/milestone' }
}

ext {
    set('springCloudVersion', 'Greenwich.RELEASE')
}

dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-actuator'
    implementation 'org.springframework.cloud:spring-cloud-starter-consul-config'
    implementation 'org.springframework.cloud:spring-cloud-starter-consul-discovery'
    implementation 'org.springframework.cloud:spring-cloud-starter-gateway'
    implementation 'org.springframework.boot:spring-boot-starter-security'
    // https://mvnrepository.com/artifact/io.netty/netty-tcnative-boringssl-static
    compile group: 'io.netty', name: 'netty-tcnative-boringssl-static', version: '2.0.20.Final'
    runtimeOnly 'org.springframework.boot:spring-boot-devtools'
    compileOnly 'org.projectlombok:lombok'
    testImplementation 'org.springframework.boot:spring-boot-starter-test'
}

dependencyManagement {
    imports {
        mavenBom "org.springframework.cloud:spring-cloud-dependencies:${springCloudVersion}"
    }
}
Below is an example of the API gateway configuration.
application.yaml
server:
  port: 10000
  http:
    port: 9000
  # enable HTTP2
  http2:
    enabled: true
  # enable compression
  compression:
    enabled: true
    mime-types: text/html,text/xml,text/plain,text/css,text/javascript,application/javascript,application/json
  ssl:
    enabled: true
    key-store: /var/.conf/self-signed.p12
    key-store-type: PKCS12
    key-store-password: "something"
    key-alias: athenasowl
    trust-store: /var/.conf/self-signe.p12
    trust-store-password: "something"
spring:
  application:
    name: api-gateway
  cloud:
    gateway:
      discovery:
        locator:
          enabled: true
          predicates:
            - Path="'/api/' + serviceId + '/**'"
          filters:
            - RewritePath="'/api/' + serviceId + '/(?<remaining>.*)'", "serviceId + '/${remaining}'"
management:
  security:
    enabled: false
  server:
    port: 10001
    ssl:
      enabled: false
  endpoint:
    gateway:
      enabled: true
  endpoints:
    web:
      exposure:
        include: "*"
  health:
    sensitive: false
logging:
  level:
    root: DEBUG
    org:
      springframework:
        web: INFO
  pattern:
    console: "%-5level %d{dd-MM-yyyy HH:mm:ss,SSS} [%F:%L] VTC : %msg%n"
    file: "%-5level %d{dd-MM-yyyy HH:mm:ss,SSS} [%F:%L] VTC : %msg%n"
  file: /tmp/log_files/apigateway.log
security:
  basic:
    enabled: false
There are a few configuration issues we are facing, listed below:
Rewrite URLs prefixed with /api/ to the respective serviceId registered in Consul: we tried to configure the predicate to match paths prefixed with /api/ and a rewrite filter to strip the /api/ prefix, but it is still not working. For example, there is a service hello-service registered with the Consul server, but we want to call it as /api/hello-service/.
Redirect unmatched requests to a default path: we want to redirect all unmatched requests to the UI.
Redirecting HTTP to HTTPS on Spring Cloud Gateway: we want to force all requests coming to the gateway to be HTTPS.
Forwarding HTTPS requests to HTTP serviceIds registered with Consul: the services registered with Consul are on HTTP except for the API gateway; we want to be able to send HTTPS requests to the HTTP backends, i.e. terminate HTTPS at the API gateway only.
Any help in solving the above issues would be appreciated.
Edit 1:
After some help from @spencergibb, we have set up Spring Cloud Gateway with HTTPS, but there are some additional issues we faced.
If HTTPS is enabled on both the API gateway and the service, we receive the error below:
javax.net.ssl.SSLException: handshake timed out at
io.netty.handler.ssl.SslHandler.handshake(...)(Unknown Source)
~[netty-handler-4.1.31.Final.jar:4.1.31.
If HTTPS is enabled on the API gateway only, we receive the error below:
There was an unexpected error (type=Not Found, status=404).
org.springframework.web.server.ResponseStatusException: 404 NOT_FOUND
for the path https://localhost:8443/api/hello-service/hello/message, and received
Unable to Connect
for the path http://localhost:8080/hello-service/hello/message
Please find the link for the sample applications
Instructions:
Navigate to the consul directory and start the Consul server using the command ./consul agent -dev
Run the api-gateway Spring Boot Gradle project
Run the rest-demo Spring Boot Gradle project
Edit 2
Thank you @spencergibb, we were able to successfully apply SSL on the gateway and call the registered services over HTTP. Since Spring WebFlux with Netty does not support listening on two ports, we created an additional TCP server bound to the HTTP port, based on this answer (see the sketch below).
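A minimal sketch of such an extra listener, assuming Reactor Netty and illustrative port properties (this is not the exact code from the linked answer):

import java.net.URI;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import reactor.netty.DisposableServer;
import reactor.netty.http.server.HttpServer;

// Starts a plain-HTTP Reactor Netty server next to the HTTPS one and redirects
// every request to the HTTPS port (class, bean and property names are illustrative).
@Configuration
public class HttpRedirectConfig {

    @Bean(destroyMethod = "disposeNow")
    public DisposableServer httpRedirectServer(@Value("${server.http.port:8080}") int httpPort,
                                               @Value("${server.port:8443}") int httpsPort) {
        return HttpServer.create()
                .port(httpPort)
                .handle((request, response) -> {
                    // Rebuild the requested URL against the HTTPS port and redirect to it.
                    String host = request.requestHeaders().get("Host", "localhost");
                    String hostName = host.contains(":") ? host.substring(0, host.indexOf(':')) : host;
                    URI location = URI.create("https://" + hostName + ":" + httpsPort + request.uri());
                    return response.sendRedirect(location.toString());
                })
                .bindNow();
    }
}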
There is still an issue we are facing with the RewritePath rule for /api/:
predicates:
  - name: Path
    args:
      pattern: "'/api/'+serviceId.toLowerCase()+'/**'"
filters:
  - name: RewritePath
    args:
      regexp: "'/api/' + serviceId.toLowerCase() + '/(?<remaining>.*)'"
      replacement: "'/${remaining}'"
Below is the complete trace for the request:
DEBUG 13-02-2019 03:32:01 [FilteringWebHandler.java:86] VTC : Sorted
gatewayFilterFactories:
[OrderedGatewayFilter{delegate=GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.AdaptCachedBodyGlobalFilter#257505fd},
order=-2147482648},
OrderedGatewayFilter{delegate=GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.GatewayMetricsFilter#400caab4},
order=-2147473648},
OrderedGatewayFilter{delegate=GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.NettyWriteResponseFilter#36e2c50b},
order=-1},
OrderedGatewayFilter{delegate=GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.ForwardPathFilter#66f0c66d}, order=0},
OrderedGatewayFilter{delegate=org.springframework.cloud.gateway.filter.factory.RewritePathGatewayFilterFactory$$Lambda$360/1720581802#5821f2e6,
order=0},
OrderedGatewayFilter{delegate=GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.RouteToRequestUrlFilter#27119239},
order=10000},
OrderedGatewayFilter{delegate=GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.LoadBalancerClientFilter#568a9d8f},
order=10100},
OrderedGatewayFilter{delegate=GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.WebsocketRoutingFilter#6ba77da3},
order=2147483646},
OrderedGatewayFilter{delegate=GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.NettyRoutingFilter#73c24516},
order=2147483647},
OrderedGatewayFilter{delegate=GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.ForwardRoutingFilter#461a9938},
order=2147483647}] TRACE 13-02-2019 03:32:01
[RouteToRequestUrlFilter.java:59] VTC : RouteToRequestUrlFilter start
TRACE 13-02-2019 03:32:02 [NettyWriteResponseFilter.java:68] VTC :
NettyWriteResponseFilter start TRACE 13-02-2019 03:32:02
[GatewayMetricsFilter.java:101] VTC : Stopping timer
'gateway.requests' with tags
[tag(outcome=CLIENT_ERROR),tag(routeId=rewrite_response_upper),tag(routeUri=http://httpbin.org:80),tag(status=NOT_FOUN
A number of things were needed:
Disable HTTP2.
Disable the SSL configuration of the HttpClient.
Update the locator predicates and filters to use the verbose configuration.
Here are the resulting portions of application.yml:
server:
  port: 8443
  http:
    port: 8080
  servlet:
  # enable HTTP2
  # http2:
  #   enabled: true
  # enable compression
  # ... removed for brevity
spring:
  application:
    name: api-gateway
  cloud:
    consul:
      enabled: true
    gateway:
      # httpclient:
      #   ssl:
      #     handshake-timeout-millis: 10000
      #     close-notify-flush-timeout-millis: 3000
      #     close-notify-read-timeout-millis: 0
      # routes:
      # - id: ui_path_route
      #   predicates:
      #   - Path="'/**'"
      #   filters:
      #   - RewritePath="'/**'", "/ui"
      discovery:
        instanceId: ${spring.application.name}:${vcap.application.instance_id:${spring.application.instance_id:${random.value}}}
        locator:
          enabled: true
          predicates:
            - name: Path
              args:
                pattern: "'/api/' + serviceId + '/**'"
          filters:
            - name: RewritePath
              args:
                regexp: "'/api/' + serviceId + '/(?<remaining>.*)'"
                replacement: "'/${remaining}'"
#... removed for brevity

spring eureka security Batch update failure with HTTP status code 401

I am studying Spring Cloud Eureka and Spring Cloud, and they work fine. But after adding security to the Eureka service, I ran into some errors.
All the code and error details are in https://github.com/keryhu/eureka-security
The eureka service application.yml
security:
  user:
    name: user
    password: password
eureka:
  client:
    registerWithEureka: false
    fetchRegistry: false
  server:
    wait-time-in-ms-when-sync-empty: 0
And the config-service Application.java:
@SpringBootApplication
@EnableConfigServer
@EnableDiscoveryClient
config-service application.yml
eureka:
  client:
    registry-fetch-interval-seconds: 5
    serviceUrl:
      defaultZone: http://user:password@${domain.name:localhost}:8761/eureka/
spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/spring-cloud-samples/config-repo
          basedir: target/config
These errors appear after starting the config-service:
2016-04-10 11:22:39.402 ERROR 80526 --- [get_localhost-3] c.n.e.cluster.ReplicationTaskProcessor : Batch update failure with HTTP status code 401; discarding 1 replication tasks
2016-04-10 11:22:39.402 WARN 80526 --- [get_localhost-3] c.n.eureka.util.batcher.TaskExecutors : Discarding 1 tasks of TaskBatchingWorker-target_localhost-3 due to permanent error
2016-04-10 11:23:09.411 ERROR 80526 --- [get_localhost-3] c.n.e.cluster.ReplicationTaskProcessor : Batch update failure with HTTP status code 401; discarding 1 replication tasks
2016-04-10 11:23:09.412 WARN 80526 --- [get_localhost-3] c.n.eureka.util.batcher.TaskExecutors : Discarding 1 tasks of TaskBatchingWorker-target_localhost-3 due to permanent error
2016-04-10 11:23:39.429 ERROR 80526 --- [get_localhost-3] c.n.e.cluster.ReplicationTaskProcessor : Batch update failure with HTTP status code 401; discarding 1 replication tasks
2016-04-10 11:23:39.430 WARN 80526 --- [get_localhost-3] c.n.eureka.util.batcher.TaskExecutors : Discarding 1 tasks of TaskBatchingWorker-target_localhost-3 due to permanent error
Set eureka.client.serviceUrl.defaultZone of the eureka-server to
http://username:password@localhost:8761/eureka/
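In the Eureka server's own application.yml that would look roughly like this (a sketch; the credentials mirror the security.user values from the question):

eureka:
  client:
    serviceUrl:
      defaultZone: http://user:password@localhost:8761/eureka/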
I agree with jacky-fan's answer.
This is how my working configuration looks without a username and password.
server application.yml
spring:
  application:
    name: eureka-service
server:
  port: 8302
eureka:
  client:
    register-with-eureka: false
    fetch-registry: false
    service-url:
      defaultZone: http://localhost:8302/eureka/
  server:
    wait-time-in-ms-when-sync-empty: 0
client application.yml
eureka:
  client:
    register-with-eureka: true
    fetch-registry: true
    service-url:
      defaultZone: http://localhost:8302/eureka/
  instance:
    hostname: localhost
spring:
  application:
    name: my-service
server:
  port: 8301
