Kafka disconnection with log Node 1 disconnected - spring-boot

When using Kafka to communicate between services, I get the following logs. After that, events are not received by the consumer, or the producer is not able to send events:
2023-02-01 03:42:40.614 INFO 3524 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Node 1 disconnected.
2023-02-01 03:47:41.228 INFO 3524 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Node 0 disconnected.
2023-02-01 03:47:41.228 INFO 3524 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Cancelled in-flight METADATA request with correlation id 617 due to node 0 being disconnected (elapsed time since creation: 108ms, elapsed time since send: 108ms, request timeout: 2000ms)
2023-02-01 05:02:48.399 INFO 3524 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Node 2 disconnected.
The configuration looks like this:
package com.bbh.ilab.aip.presentationservice.config
import com.bbh.ilab.aip.commons.kafka.BaseEvent
import com.bbh.ilab.aip.commons.kafka.EventPayload
import org.apache.kafka.clients.producer.ProducerConfig
import org.apache.kafka.common.config.SslConfigs
import org.apache.kafka.common.serialization.StringSerializer
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.kafka.core.DefaultKafkaProducerFactory
import org.springframework.kafka.core.KafkaTemplate
import org.springframework.kafka.core.ProducerFactory
import org.springframework.kafka.support.serializer.JsonSerializer
@Configuration
class KafkaProducerConfig(
private val environmentProperties: EnvironmentProperties
) {
@Bean
fun producerFactory(): ProducerFactory<String, BaseEvent<EventPayload>> {
val configProps = getStandardConfig()
if (environmentProperties.kafka.sslEnabled) {
addSslConfig(configProps)
}
return DefaultKafkaProducerFactory(configProps)
}
@Bean
fun kafkaTemplate(): KafkaTemplate<String, BaseEvent<EventPayload>> {
return KafkaTemplate(producerFactory())
}
private fun getStandardConfig(): MutableMap<String, Any> {
val configProps: MutableMap<String, Any> = HashMap()
configProps[ProducerConfig.BOOTSTRAP_SERVERS_CONFIG] = environmentProperties.kafka.bootstrapUrl
configProps[ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG] = StringSerializer::class.java
configProps[ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG] = JsonSerializer::class.java
configProps[ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG] = environmentProperties.kafka.retry.deliveryTimeoutMs
configProps[ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG] = environmentProperties.kafka.retry.requestTimeoutMs
configProps[ProducerConfig.RETRY_BACKOFF_MS_CONFIG] = environmentProperties.kafka.retry.retryBackoffMs
return configProps
}
private fun addSslConfig(configProps: MutableMap<String, Any>) {
configProps["security.protocol"] = "SSL"
configProps[SslConfigs.SSL_KEYSTORE_TYPE_CONFIG] = "PKCS12"
configProps[SslConfigs.SSL_TRUSTSTORE_TYPE_CONFIG] = "PKCS12"
configProps[SslConfigs.SSL_KEYSTORE_LOCATION_CONFIG] = environmentProperties.kafka.keystorePath
configProps[SslConfigs.SSL_KEYSTORE_PASSWORD_CONFIG] = environmentProperties.kafka.keystorePass
configProps[SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG] = environmentProperties.kafka.truststorePath
configProps[SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG] = environmentProperties.kafka.truststorePass
configProps[SslConfigs.SSL_KEY_PASSWORD_CONFIG] = environmentProperties.kafka.keystorePass
}
}
Events are sent by this code:
fun sendSplitEvent(configDto: ConfigDto, domain: Domain, timestamp: LocalDateTime) {
val payload = SplitColumnRequestPayload(
configDto.projectId,
domain,
configDto.targetColumn,
configDto.split
)
val event = SplitRequestEvent(UUID.randomUUID(), timestamp, payload)
kafkaTemplate.send(environmentProperties.kafka.topic, event as BaseEvent<EventPayload>)
}
I tried changing the configuration following https://github.com/strimzi/strimzi-kafka-operator/issues/2729, but it is still not working. Please help me :)
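The "Node N disconnected" messages are logged at INFO level and are frequently benign: brokers drop connections that sit idle longer than connections.max.idle.ms, and the producer reconnects on the next send. They point to a real problem only when sends subsequently fail. Below is a minimal sketch (not the original configuration, and the values are illustrative assumptions) of two things commonly tried here: raising the unusually low request timeout visible in the log (2000 ms, versus the Kafka default of 30000 ms) and attaching a callback so send failures surface explicitly instead of being inferred from NetworkClient logs.

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.serializer.JsonSerializer;

public class ProducerTuningSketch {

    public static KafkaTemplate<String, Object> kafkaTemplate(String bootstrap) {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
        // 2000 ms (seen in the question's log) is well below the Kafka default of
        // 30000 ms and makes in-flight metadata requests easy to cancel.
        props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 30000);
        // Close idle client connections before the broker does (broker-side
        // connections.max.idle.ms defaults to 600000 ms), so reconnects are client-driven.
        props.put(ProducerConfig.CONNECTIONS_MAX_IDLE_MS_CONFIG, 300000);
        return new KafkaTemplate<>(new DefaultKafkaProducerFactory<>(props));
    }

    public static void send(KafkaTemplate<String, Object> template, String topic, Object event) {
        // spring-kafka 2.x: send() returns a ListenableFuture; on 3.x it returns a
        // CompletableFuture, where whenComplete((res, ex) -> ...) does the same job.
        template.send(topic, event).addCallback(
                result -> System.out.println("Sent " + event + " to " + topic),
                ex -> System.err.println("Send failed for " + event + ": " + ex));
    }
}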

Related

Query Graphql is not called

After migrating a Kotlin project's Spring Boot version to 2.7.5, I needed to update some old dependencies.
After these updates, neither queries nor mutations are being called.
As a result, the GraphQL call returns null.
The application starts up normally, with no errors, and no error is logged when I call GraphQL through Postman.
Below is my Resolver class
package io.company.op.vehicle.resolver
import io.company.op.vehicle.model.*
import io.company.op.vehicle.model.dto.ModelConfigInput
import io.company.op.vehicle.pagination.ModelConfigPage
import io.company.op.vehicle.security.SecurityService
import io.company.op.vehicle.service.ModelConfigService
import io.company.op.vehicle.utils.formattedError
import org.slf4j.LoggerFactory
import org.springframework.graphql.data.method.annotation.MutationMapping
import org.springframework.graphql.data.method.annotation.QueryMapping
import org.springframework.security.access.prepost.PreAuthorize
import org.springframework.stereotype.Component
@Component
class ModelConfigResolver(
private val modelConfigService: ModelConfigService,
private val securityService: SecurityService
) {
@PreAuthorize("@securityService.canFetchModelConfig()")
@QueryMapping
fun fetchAllModelConfig(
searchTerm: String,
searchEngineTypeEmpty: Boolean,
searchLockLocationTypeEmpty: Boolean,
searchTransmissionTypeEmpty: Boolean,
engineTypes: List<EngineType>,
lockLocationTypes: List<LockLocationType>,
transmissionTypes: List<TransmissionType>,
status: List<ModelConfigStatus>,
page: Int,
pageSize: Int = 5
): ModelConfigPage? {
val user = securityService.getUser()
try {
return modelConfigService.fetchAllModelConfig(
searchTerm,
searchEngineTypeEmpty,
searchLockLocationTypeEmpty,
searchTransmissionTypeEmpty,
engineTypes,
lockLocationTypes,
transmissionTypes,
status,
page,
pageSize
)
} catch (ex: Exception) {
log.formattedError(
"fetchAllModelConfig",
user,
"SearchTerm: $searchTerm, EngineTypes: $engineTypes, LockLocationTypes: $lockLocationTypes, TransmissionTypes: $transmissionTypes, Status: $status",
ex
)
throw ex
}
}
@PreAuthorize("@securityService.canFetchModelConfig()")
@QueryMapping
fun fetchModelConfig(
vehicleModelId: Long,
vds: String,
modelYear: Int,
transmissionType: TransmissionType
): ModelConfig? {
val user = securityService.getUser()
try {
return modelConfigService.fetchModelConfig(
vehicleModelId, vds, modelYear, transmissionType
)
} catch (ex: Exception) {
log.formattedError(
"fetchModelConfig",
user,
"VehicleModelId: $vehicleModelId, vds: $vds, modelYear: $modelYear, TransmissionType: $transmissionType",
ex
)
throw ex
}
}
@PreAuthorize("@securityService.canFetchModelConfig()")
@QueryMapping
fun fetchModelConfigById(
id: Long
): ModelConfig? {
val user = securityService.getUser()
try {
return modelConfigService.fetchModelConfigById(id)
} catch (ex: Exception) {
log.formattedError(
"fetchModelConfig",
user,
"Id: $id",
ex
)
throw ex
}
}
@PreAuthorize("@securityService.canCreateOrUpdateModelConfig()")
@MutationMapping
fun createOrUpdateModelConfig(modelConfigInput: ModelConfigInput): ModelConfig {
val user = securityService.getUser()
try {
return modelConfigService.createOrUpdateModelConfig(modelConfigInput, user)
} catch (ex: Exception) {
log.formattedError(
"createOrUpdateModelConfig",
user,
"ModelConfigInput: $modelConfigInput",
ex
)
throw ex
}
}
@PreAuthorize("@securityService.canHomologateModelConfig()")
@MutationMapping
fun homologateModelConfig(modelConfigInput: ModelConfigInput): ModelConfig {
val user = securityService.getUser()
try {
return modelConfigService.homologateModelConfig(modelConfigInput, user)
} catch (ex: Exception) {
log.formattedError(
"homologateModelConfig",
user,
"ModelConfigInput: $modelConfigInput",
ex
)
throw ex
}
}
@PreAuthorize("@securityService.canDeleteModelConfig()")
@MutationMapping
fun deleteModelConfig(id: Long): ModelConfig {
val user = securityService.getUser()
try {
return modelConfigService.deleteModelConfig(id, user)
} catch (ex: Exception) {
log.formattedError(
"deleteModelConfig",
user,
"ModelConfigId: $id",
ex
)
throw ex
}
}
companion object {
val log = LoggerFactory.getLogger(ModelConfigResolver::class.java)!!
}
}
SecurityConfiguration Class
package io.company.op.vehicle.security
import com.fasterxml.jackson.module.kotlin.jacksonObjectMapper
import io.company.cd.common.spring.auth.AuthFilter
import io.company.cd.common.spring.auth.domain.TokenType
import org.springframework.beans.factory.annotation.Value
import org.springframework.context.annotation.Configuration
import org.springframework.security.config.annotation.method.configuration.EnableGlobalMethodSecurity
import org.springframework.security.config.annotation.web.builders.HttpSecurity
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter
import org.springframework.security.config.http.SessionCreationPolicy
import org.springframework.security.web.authentication.preauth.RequestHeaderAuthenticationFilter
@Configuration
@EnableWebSecurity
@EnableGlobalMethodSecurity(prePostEnabled = true)
class SecurityConfiguration : WebSecurityConfigurerAdapter() {
#Value("\${vehicle.op.jwk.key-set}")
private lateinit var keySet: List<String>
#Value("\${vehicle.op.jwt.token-types}")
private lateinit var jsonTokenTypes: String
protected val mapper = jacksonObjectMapper()
override fun configure(http: HttpSecurity) {
val tokenTypes: List<TokenType> = mapper.readValue(jsonTokenTypes, Array<TokenType>::class.java).toList()
http.csrf().disable()
.authorizeRequests()
.antMatchers("/graphql").permitAll()
.and()
.sessionManagement().sessionCreationPolicy(SessionCreationPolicy.STATELESS)
.and()
.addFilterBefore(
AuthFilter.create(keySet, tokenTypes),
RequestHeaderAuthenticationFilter::class.java
)
}
}
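Unrelated to the null responses, but worth noting for a Spring Boot 2.7.5 migration: WebSecurityConfigurerAdapter is deprecated as of Spring Security 5.7, and the recommended replacement is a SecurityFilterChain bean. A sketch of the equivalent configuration in Java (the AuthFilter wiring is left out, since it depends on the keySet/tokenTypes properties above):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.http.SessionCreationPolicy;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
public class SecurityFilterChainConfig {

    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        // Same rules as the WebSecurityConfigurerAdapter version above.
        http.csrf().disable()
            .authorizeRequests().antMatchers("/graphql").permitAll()
            .and()
            .sessionManagement().sessionCreationPolicy(SessionCreationPolicy.STATELESS);
        // addFilterBefore(AuthFilter.create(keySet, tokenTypes), ...) would be
        // carried over here; omitted in this sketch.
        return http.build();
    }
}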
SecurityService Class
package io.company.op.vehicle.security
import io.company.cd.common.spring.auth.domain.IntegrationToken
import io.company.cd.common.spring.auth.domain.CompanyToken
import io.company.cd.common.spring.auth.domain.UnrestrictedToken
import io.company.op.vehicle.exception.VehicleErrorType
import io.company.op.vehicle.exception.VehicleException
import org.slf4j.LoggerFactory
import org.springframework.security.core.context.SecurityContextHolder
import org.springframework.stereotype.Service
@Service
class SecurityService {
fun getUser(): User {
return when (val token = SecurityContextHolder.getContext().authentication.principal) {
is CompanyToken ->
User(
username = token.username,
type = UserType.COMPANY
)
is UnrestrictedToken ->
User(
username = "Unrestricted",
type = UserType.UNRESTRICTED
)
is IntegrationToken ->
User(
username = "Integration",
type = UserType.INTEGRATION
)
else -> throw VehicleException(VehicleErrorType.UNAUTHORIZED)
}
}
fun canFetchModelConfig(): Boolean {
val user = getUser()
log.debug("Validating fetch model config for User: ${user.username}")
return when (user.type) {
UserType.COMPANY -> true
UserType.UNRESTRICTED -> true
UserType.INTEGRATION -> true
}
}
fun canCreateOrUpdateModelConfig(): Boolean {
val user = getUser()
log.debug("Validating create model config for User: ${user.username}")
return when (user.type) {
UserType.COMPANY -> true
UserType.UNRESTRICTED -> true
UserType.INTEGRATION -> true
}
}
fun canDeleteModelConfig(): Boolean {
val user = getUser()
log.debug("Validating delete model config for User: ${user.username}")
return when (user.type) {
UserType.COMPANY -> true
UserType.UNRESTRICTED -> true
UserType.INTEGRATION -> true
}
}
fun canHomologateModelConfig(): Boolean {
val user = getUser()
log.debug("Validating homologate model config for User: ${user.username}")
return when (user.type) {
UserType.COMPANY -> true
UserType.UNRESTRICTED -> true
UserType.INTEGRATION -> true
}
}
fun canFetchVehicleModel(): Boolean {
val user = getUser()
log.debug("Validating fetch vehicle model for User: ${user.username}")
return when (user.type) {
UserType.COMPANY -> true
UserType.UNRESTRICTED -> true
UserType.INTEGRATION -> true
}
}
fun canFetchVehicleBrand(): Boolean {
val user = getUser()
log.debug("Validating fetch vehicle brand for User: ${user.username}")
return when (user.type) {
UserType.COMPANY -> true
UserType.UNRESTRICTED -> true
UserType.INTEGRATION -> true
}
}
companion object {
val log = LoggerFactory.getLogger(SecurityService::class.java)!!
}
}
Application Log
2022-12-08 18:17:42.748 INFO 8516 --- [ main] o.hibernate.jpa.internal.util.LogHelper : HHH000204: Processing PersistenceUnitInfo [name: default]
2022-12-08 18:17:42.810 INFO 8516 --- [ main] org.hibernate.Version : HHH000412: Hibernate ORM core version 5.6.12.Final
2022-12-08 18:17:43.005 INFO 8516 --- [ main] o.hibernate.annotations.common.Version : HCANN000001: Hibernate Commons Annotations {5.1.2.Final}
2022-12-08 18:17:43.152 INFO 8516 --- [ main] org.hibernate.dialect.Dialect : HHH000400: Using dialect: org.hibernate.dialect.PostgreSQL9Dialect
2022-12-08 18:17:43.580 INFO 8516 --- [ main] o.h.e.t.j.p.i.JtaPlatformInitiator : HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform]
2022-12-08 18:17:43.592 INFO 8516 --- [ main] j.LocalContainerEntityManagerFactoryBean : Initialized JPA EntityManagerFactory for persistence unit 'default'
2022-12-08 18:17:43.883 WARN 8516 --- [ main] .s.s.UserDetailsServiceAutoConfiguration :
Using generated security password: 96d94b82-7cd1-431c-a831-3ed6d56519bc
This generated password is for development use only. Your security configuration must be updated before running your application in production.
2022-12-08 18:17:44.091 INFO 8516 --- [ main] o.s.s.web.DefaultSecurityFilterChain : Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@32db94fb, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@6b61a4b0, org.springframework.security.web.context.SecurityContextPersistenceFilter@570089c4, org.springframework.security.web.header.HeaderWriterFilter@40e90634, org.springframework.security.web.authentication.logout.LogoutFilter@34538ffe, io.company.cd.common.spring.auth.AuthFilter@5c00de0d, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@6e818345, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@6e2e11ee, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@6278371a, org.springframework.security.web.session.SessionManagementFilter@b6fea12, org.springframework.security.web.access.ExceptionTranslationFilter@655d5285, org.springframework.security.web.access.intercept.FilterSecurityInterceptor@64e680c5]
2022-12-08 18:17:45.419 INFO 8516 --- \[ main\] s.b.a.g.s.GraphQlWebMvcAutoConfiguration : GraphQL endpoint HTTP POST /graphql
2022-12-08 18:17:45.981 INFO 8516 --- \[ main\] o.s.b.a.e.web.EndpointLinksResolver : Exposing 1 endpoint(s) beneath base path '/actuator'
2022-12-08 18:17:46.077 INFO 8516 --- \[ main\] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 9090 (http) with context path ''
2022-12-08 18:17:46.094 INFO 8516 --- \[ main\] io.company.op.vehicle.VehicleApplicationKt : Started VehicleApplicationKt in 8.186 seconds (JVM running for 9.011)
2022-12-08 18:17:58.028 INFO 8516 --- [nio-9090-exec-2] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring DispatcherServlet 'dispatcherServlet'
2022-12-08 18:17:58.029 INFO 8516 --- \[nio-9090-exec-2\] o.s.web.servlet.DispatcherServlet : Initializing Servlet 'dispatcherServlet'
2022-12-08 18:17:58.031 INFO 8516 --- \[nio-9090-exec-2\] o.s.web.servlet.DispatcherServlet : Completed initialization in 2 ms
When calling the query fetchAllModelConfig it returns this:
{
"data": {
"fetchAllModelConfig": null
}
}
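A likely cause of exactly this "data is null, no errors" symptom, worth checking after the migration: Spring for GraphQL only detects @QueryMapping/@MutationMapping handler methods on beans annotated with @Controller. A resolver registered as a plain @Component is not scanned, the schema fields stay unmapped, and the default property fetcher quietly resolves them to null. Handler parameters are also bound to GraphQL arguments explicitly with @Argument. A minimal sketch in Java (the class name and delegation are illustrative, not from the question):

import io.company.op.vehicle.pagination.ModelConfigPage; // type from the question
import org.springframework.graphql.data.method.annotation.Argument;
import org.springframework.graphql.data.method.annotation.QueryMapping;
import org.springframework.stereotype.Controller;

// Sketch only: AnnotatedControllerConfigurer registers @QueryMapping methods
// found on @Controller beans; @Component alone leaves the field unmapped.
@Controller
public class ModelConfigController {

    // The method name (or @QueryMapping(name = "...")) must match the schema
    // field, and every GraphQL argument is bound explicitly with @Argument.
    @QueryMapping
    public ModelConfigPage fetchAllModelConfig(@Argument String searchTerm,
                                               @Argument int page,
                                               @Argument int pageSize) {
        return null; // delegate to modelConfigService in the real resolver
    }
}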

route FROM and route TO with spring cloud stream and functions

I have some issues with the new routing feature in Spring Cloud Stream.
I tried to implement a simple scenario: I want to send a message with a header spring.cloud.function.definition = consume1 or consume2.
I expect consume1 or consume2 to be called based on what is sent in the header, but the methods are called randomly.
I send the message to the exchange consumer using the RabbitMQ admin console.
I see the following logs:
2020-02-27 14:48:25.896 INFO 22132 --- [ consumer.app-1] com.example.demo.TestConsumer : ==============>consume1 messge [[payload=ok, headers={amqp_receivedDeliveryMode=NON_PERSISTENT, amqp_receivedRoutingKey=#, amqp_receivedExchange=consumer, amqp_deliveryTag=1, deliveryAttempt=1, amqp_consumerQueue=consumer.app, amqp_redelivered=false, id=9a4dff25-88ef-4d76-93e2-c8719cda122d, spring.cloud.function.definition=consume1, amqp_consumerTag=amq.ctag-gGChFNCKIVd25yyR9H6-fQ, sourceData=(Body:'[B@3a92faa7(byte[2])' MessageProperties [headers={spring.cloud.function.definition=consume1}, contentLength=0, receivedDeliveryMode=NON_PERSISTENT, redelivered=false, receivedExchange=consumer, receivedRoutingKey=#, deliveryTag=1, consumerTag=amq.ctag-gGChFNCKIVd25yyR9H6-fQ, consumerQueue=consumer.app]), timestamp=1582811303347}]]
2020-02-27 14:48:25.984 INFO 22132 --- [nio-8080-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring DispatcherServlet 'dispatcherServlet'
2020-02-27 14:48:25.984 INFO 22132 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Initializing Servlet 'dispatcherServlet'
2020-02-27 14:48:25.991 INFO 22132 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Completed initialization in 7 ms
2020-02-27 14:48:26.037 INFO 22132 --- [oundedElastic-1] o.s.i.monitor.IntegrationMBeanExporter : Registering MessageChannel customer-1
2020-02-27 14:48:26.111 INFO 22132 --- [oundedElastic-1] o.s.c.s.m.DirectWithAttributesChannel : Channel 'application.customer-1' has 1 subscriber(s).
2020-02-27 14:48:26.116 INFO 22132 --- [oundedElastic-1] o.s.a.r.c.CachingConnectionFactory : Attempting to connect to: [localhost:5672]
2020-02-27 14:48:26.123 INFO 22132 --- [oundedElastic-1] o.s.a.r.c.CachingConnectionFactory : Created new connection: rabbitConnectionFactory.publisher@32438e24:0/SimpleConnection@3e58666d [delegate=amqp://guest@127.0.0.1:5672/, localPort= 62514]
2020-02-27 14:48:26.139 INFO 22132 --- [-1.customer-1-1] o.s.i.h.s.MessagingMethodInvokerHelper : Overriding default instance of MessageHandlerMethodFactory with provided one.
2020-02-27 14:48:26.140 INFO 22132 --- [-1.customer-1-1] com.example.demo.TestSink : Data received customer-1...body
2020-02-27 14:49:14.185 INFO 22132 --- [ consumer.app-1] o.s.i.h.s.MessagingMethodInvokerHelper : Overriding default instance of MessageHandlerMethodFactory with provided one.
2020-02-27 14:49:14.194 INFO 22132 --- [ consumer.app-1] com.example.demo.TestConsumer : ==============>consume2 messge [[payload=ok, headers={amqp_receivedDeliveryMode=NON_PERSISTENT, amqp_receivedRoutingKey=#, amqp_receivedExchange=consumer, amqp_deliveryTag=1, deliveryAttempt=1, amqp_consumerQueue=consumer.app, amqp_redelivered=false, id=33581edb-2832-1c92-b765-a05794512b34, spring.cloud.function.definition=consume1, amqp_consumerTag=amq.ctag-RIp2nZdcG2a0hNQeImwtBw, sourceData=(Body:'[B@8159793(byte[2])' MessageProperties [headers={spring.cloud.function.definition=consume1}, contentLength=0, receivedDeliveryMode=NON_PERSISTENT, redelivered=false, receivedExchange=consumer, receivedRoutingKey=#, deliveryTag=1, consumerTag=amq.ctag-RIp2nZdcG2a0hNQeImwtBw, consumerQueue=consumer.app]), timestamp=1582811354186}]]
2020-02-27 14:49:14.203 INFO 22132 --- [oundedElastic-1] o.s.i.monitor.IntegrationMBeanExporter : Registering MessageChannel customer-2
2020-02-27 14:49:14.213 INFO 22132 --- [oundedElastic-1] o.s.c.s.m.DirectWithAttributesChannel : Channel 'application.customer-2' has 1 subscriber(s).
2020-02-27 14:49:14.216 INFO 22132 --- [-2.customer-2-1] o.s.i.h.s.MessagingMethodInvokerHelper : Overriding default instance of MessageHandlerMethodFactory with provided one.
2020-02-27 14:49:14.216 INFO 22132 --- [-2.customer-2-1] com.example.demo.TestSink : Data received customer-2...body
application.yml
spring:
  main:
    allow-bean-definition-overriding: true
spring.cloud.stream:
  function.definition: supplier;receive1;receive2;consume1;consume2
  function.routing:
    enabled: true
  bindings:
    consume1-in-0.destination: consumer
    consume1-in-0.group: app
    consume2-in-0.destination: consumer
    consume2-in-0.group: app
    receive1-in-0.destination: customer-1
    receive1-in-0.group: customer-1
    receive2-in-0.destination: customer-2
    receive2-in-0.group: customer-2
DemoApplication.kt
import com.fasterxml.jackson.databind.ObjectMapper
import org.apache.commons.logging.Log
import org.apache.commons.logging.LogFactory
import org.springframework.boot.autoconfigure.SpringBootApplication
import org.springframework.boot.runApplication
import org.springframework.context.annotation.Bean
import org.springframework.http.HttpStatus
import org.springframework.messaging.Message
import org.springframework.messaging.support.MessageBuilder
import org.springframework.stereotype.Component
import org.springframework.web.bind.annotation.PathVariable
import org.springframework.web.bind.annotation.RequestMapping
import org.springframework.web.bind.annotation.RequestMethod.GET
import org.springframework.web.bind.annotation.ResponseStatus
import org.springframework.web.bind.annotation.RestController
import org.springframework.web.client.RestTemplate
import reactor.core.publisher.EmitterProcessor
import reactor.core.publisher.Flux
import java.util.function.Consumer
import java.util.function.Supplier
@SpringBootApplication
class DemoApplication
fun main(args: Array<String>) {
runApplication<DemoApplication>(*args)
}
@RestController
class DynamicDestinationController(private val jsonMapper: ObjectMapper) {
private val processor: EmitterProcessor<Message<String>> = EmitterProcessor.create<Message<String>>()
@RequestMapping(path = ["/api/dest/{destName}"], method = [GET], consumes = ["*/*"])
@ResponseStatus(HttpStatus.ACCEPTED)
fun handleRequest(@PathVariable destName: String) {
val message: Message<String> = MessageBuilder.withPayload("body")
.setHeader("spring.cloud.stream.sendto.destination", destName).build()
processor.onNext(message)
}
@Bean
fun supplier(): Supplier<Flux<Message<String>>> {
return Supplier { processor }
}
}
const val destResourceUrl = "http://localhost:8080/api/dest"
@Component
class TestConsumer() {
private val restTemplate: RestTemplate = RestTemplate()
private val logger: Log = LogFactory.getLog(javaClass)
@Bean
fun consume1(): Consumer<Message<String>> = Consumer {
logger.info("==============>consume1 messge [[payload=${it.payload}, headers=${it.headers}]]")
restTemplate.getForEntity("$destResourceUrl/customer-1", String::class.java)
}
@Bean
fun consume2(): Consumer<Message<String>> = Consumer {
logger.info("==============>consume2 messge [[payload=${it.payload}, headers=${it.headers}]]")
restTemplate.getForEntity("$destResourceUrl/customer-2", String::class.java)
}
}
@Component
class TestSink {
private val logger: Log = LogFactory.getLog(javaClass)
@Bean
fun receive1(): Consumer<String> = Consumer {
logger.info("Data received customer-1..." + it);
}
@Bean
fun receive2(): Consumer<String> = Consumer {
logger.info("Data received customer-2..." + it);
}
}
Any idea how to fix the routing to the consumers?
Thanks in advance.
demo-repo
Actually I am a bit confused, so let's do it one step at a time. Here is a functioning app (modelled after yours) which uses the sendto feature, allowing you to send messages to specific (existing and/or dynamically resolved) destinations.
(It is in Java, but you can rework it to Kotlin.)
@Controller
public class WebSourceApplication {
public static void main(String[] args) {
SpringApplication.run(WebSourceApplication.class,
"--spring.cloud.function.definition=supplier;consA;consB",
"--spring.cloud.stream.bindings.consA-in-0.destination=consumerA",
"--spring.cloud.stream.bindings.consA-in-0.group=consumerA-grp",
"--spring.cloud.stream.bindings.consB-in-0.destination=consumerB",
"--spring.cloud.stream.bindings.consB-in-0.group=consumerB-grp"
);
}
EmitterProcessor<Message<String>> processor = EmitterProcessor.create();
@RequestMapping(path = "/api/dest/{destName}", consumes = "*/*")
@ResponseStatus(HttpStatus.ACCEPTED)
public void delegateToSupplier(@RequestBody String body, @PathVariable String destName) {
Message<String> message = MessageBuilder.withPayload(body)
.setHeader("spring.cloud.stream.sendto.destination", destName)
.build();
processor.onNext(message);
}
@Bean
public Supplier<Flux<Message<String>>> supplier() {
return () -> processor;
}
@Bean
public Consumer<String> consA() {
return v -> {
System.out.println("Consuming from consA: " + v);
};
}
@Bean
public Consumer<String> consB() {
return v -> {
System.out.println("Consuming from consB: " + v);
};
}
}
And when I curl it I get a consistent invocation of the appropriate consumer:
curl -H "Content-Type: application/json" -X POST -d "Hello Spring Cloud Stream" http://localhost:8080/api/dest/consumerA
log: Consuming from consA: Hello Spring Cloud Stream
. . .
curl -H "Content-Type: application/json" -X POST -d "Hello Spring Cloud Stream" http://localhost:8080/api/dest/consumerB
log: Consuming from consB: Hello Spring Cloud Stream
Notice: there is no enable-routing property. That feature is mainly aimed at always calling one function, functionRouter, and having it call other functions on your behalf. It is a feature of spring-cloud-function, which means it works outside of spring-cloud-stream and channels/destinations etc.
Isn't that what you are trying to accomplish? Send a message to a different destination based on some path variable in your HTTP request?
Here is an example of a different microservice which relies on a routing function, which then routes to different functions:
public class FunctionRoutingApplication {
public static void main(String[] args) {
SpringApplication.run(FunctionRoutingApplication.class,
"--spring.cloud.stream.function.routing.enabled=true"
);
}
@Bean
public Consumer<String> consA() {
return v -> {
System.out.println("Consuming from consA: " + v);
};
}
@Bean
public Consumer<String> consB() {
return v -> {
System.out.println("Consuming from consB: " + v);
};
}
}
And that's pretty much it. Go to your broker and send data to the functionRouter-in-0 exchange while providing a spring.cloud.function.definition=consA/consB header, and you will see consistent invocations.
Am I still missing something?
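For reference, a sketch of sending that header from code instead of the broker admin console, using a plain RabbitTemplate against the functionRouter-in-0 exchange (the exchange name assumes the default binding created by the routing function, and the routing key matches the # seen in the logs above):

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class RoutingHeaderSender {

    public static void main(String[] args) {
        RabbitTemplate template = new RabbitTemplate(new CachingConnectionFactory("localhost"));
        // The routing function reads spring.cloud.function.definition from the
        // message headers and dispatches to consA or consB accordingly.
        template.convertAndSend("functionRouter-in-0", "#", "Hello Spring Cloud Stream",
                message -> {
                    message.getMessageProperties().setHeader(
                            "spring.cloud.function.definition", "consA");
                    return message;
                });
    }
}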

Spring kafka no message received

I'm using Spring Boot 2.2.4.RELEASE and spring-kafka 2.4.2.RELEASE.
My scenario is the following one:
In my microservice (let's call it producer microservice) I need to create Kafka topics and then, in some circumstances, I need to send messages over a single topic.
These messages must be received and handled by another microservice (let's call it consumer microservice). In this consumer microservice I must create a Kafka listener every time a new topic is created on the server side.
So I wrote the following code
producer microservice
spring kafka config:
@Configuration
public class WebmailKafkaConfig {
@Autowired
private Environment environment;
@Bean
public KafkaAdmin kafkaAdmin(){
Map<String, Object> configuration = new HashMap<String, Object>();
configuration.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, environment.getProperty("webmail.be.messaging.kafka.bootstrap.address"));
KafkaAdmin result = new KafkaAdmin(configuration);
return result;
}
@Bean
public ProducerFactory<String, RicezioneMailMessage> producerFactory() {
Map<String, Object> configProps = new HashMap<>();
configProps.put( ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, environment.getProperty("webmail.be.messaging.kafka.bootstrap.address"));
configProps.put( ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
//configProps.put( ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
configProps.put( ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
return new DefaultKafkaProducerFactory<>(configProps);
}
#Bean("ricezioneMailMessageKafkaTemplate")
public KafkaTemplate<String, RicezioneMailMessage> ricezioneMailMessageKafkaTemplate() {
return new KafkaTemplate<>(producerFactory());
}
}
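As an aside (a sketch, not part of the original code): when topic names are known at startup, spring-kafka can create them declaratively, because KafkaAdmin automatically creates any NewTopic beans it finds in the context. The name, partition count, and replication factor below are illustrative.

import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Declarative alternative for topics known at startup: KafkaAdmin creates
// missing NewTopic beans when the application context initializes.
@Configuration
public class TopicConfig {

    @Bean
    public NewTopic ricezioneMailTopic() {
        return new NewTopic("ricezione-mail", 1, (short) 1);
    }
}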
spring service kafka manager
@Service
public class WebmailKafkaTopicSvcImpl implements WebmailKafkaTopicSvc {
private static final Logger logger = LoggerFactory.getLogger(WebmailKafkaTopicSvcImpl.class.getName());
@Autowired
private KafkaAdmin kafkaAdmin;
@Value("${webmail.be.messaging.kafka.topic.numero.partizioni}")
private int numeroPartizioni;
@Value("${webmail.be.messaging.kafka.topic.fattore.replica}")
private short fattoreReplica;
@Autowired
@Qualifier("ricezioneMailMessageKafkaTemplate")
private KafkaTemplate<String, RicezioneMailMessage> ricezioneMailMessageKafkaTemplate;
@Override
public void createKafkaTopic(String topicName) throws Exception {
if(!StringUtils.hasText(topicName)){
throw new IllegalArgumentException("Passato un topic name non valido ["+topicName+"]");
}
AdminClient adminClient = null;
try{
adminClient = AdminClient.create(kafkaAdmin.getConfig());
List<NewTopic> topics = new ArrayList<>(1);
NewTopic topic = new NewTopic(topicName, numeroPartizioni, fattoreReplica);
topics.add(topic);
CreateTopicsResult result = adminClient.createTopics(topics);
result.all().whenComplete(new KafkaFuture.BiConsumer<Void, Throwable>() {
@Override
public void accept(Void aVoid, Throwable throwable) {
if( throwable != null ){
logger.error("Errore creazione topic", throwable);
}
}
});
}finally {
if( adminClient != null ){
adminClient.close();
}
}
}
@Override
public void sendMessage(RicezioneMailMessage rmm) throws Exception {
ListenableFuture<SendResult<String, RicezioneMailMessage>> future = ricezioneMailMessageKafkaTemplate.send(rmm.getPk(), rmm);
future.addCallback(new ListenableFutureCallback<SendResult<String, RicezioneMailMessage>>() {
@Override
public void onFailure(Throwable ex) {
if( logger.isWarnEnabled() ){
logger.warn("Impossibile inviare il messaggio=["
+ rmm + "] a causa di : " + ex.getMessage(),ex);
}
}
@Override
public void onSuccess(SendResult<String, RicezioneMailMessage> result) {
if(logger.isTraceEnabled()){
logger.trace("Inviato messaggio=[" + rmm +
"] con offset=[" + result.getRecordMetadata().offset() + "]");
}
}
});
}
}
On the producer side everything works pretty well: I'm able to create topics and send messages.
consumer microservice
dynamic listener class
public class DynamicKafkaConsumer {
private final String brokerAddress;
private final String topicName;
private boolean stopTest;
private static final Logger logger = LoggerFactory.getLogger(DynamicKafkaConsumer.class.getName());
public DynamicKafkaConsumer(String brokerAddress, String topicName) {
if( !StringUtils.hasText(brokerAddress)){
throw new IllegalArgumentException("Passato un broker address non valido");
}
if( !StringUtils.hasText(topicName)){
throw new IllegalArgumentException("Passato un topicName non valido");
}
this.brokerAddress = brokerAddress;
this.topicName = topicName;
if( logger.isTraceEnabled() ){
logger.trace("Creato {} con topicName {} e brokerAddress {}", this.getClass().getName(), this.topicName, this.brokerAddress);
}
}
public final void start() {
MessageListener<String, RicezioneMailMessage> messageListener = (record -> {
RicezioneMailMessage messaggioRicevuto = record.value();
if( logger.isInfoEnabled() ){
logger.info("Ricevuto messaggio {} su topic {}", messaggioRicevuto, topicName);
}
stopTest = true;
});
ConcurrentMessageListenerContainer<String, RicezioneMailMessage> container =
new ConcurrentMessageListenerContainer<>(
consumerFactory(brokerAddress),
containerProperties(topicName, messageListener));
container.start();
}
private DefaultKafkaConsumerFactory<String, RicezioneMailMessage> consumerFactory(String brokerAddress) {
return new DefaultKafkaConsumerFactory<>(
consumerConfig(brokerAddress),
new StringDeserializer(),
new JsonDeserializer<>(RicezioneMailMessage.class));
}
private ContainerProperties containerProperties(String topic, MessageListener<String, RicezioneMailMessage> messageListener) {
ContainerProperties containerProperties = new ContainerProperties(topic);
containerProperties.setMessageListener(messageListener);
return containerProperties;
}
private Map<String, Object> consumerConfig(String brokerAddress) {
return Map.of(
BOOTSTRAP_SERVERS_CONFIG, brokerAddress,
GROUP_ID_CONFIG, "groupId",
AUTO_OFFSET_RESET_CONFIG, "earliest",
ALLOW_AUTO_CREATE_TOPICS_CONFIG, "false"
);
}
public boolean isStopTest() {
return stopTest;
}
public void setStopTest(boolean stopTest) {
this.stopTest = stopTest;
}
}
simple unit test
public class TestRicezioneMessaggiCasellaPostale {
private static final Logger logger = LoggerFactory.getLogger(TestRicezioneMessaggiCasellaPostale.class.getName());
@Test
public void testRicezioneMessaggiMail() {
try {
String brokerAddress = "localhost:9092";
DynamicKafkaConsumer consumer = new DynamicKafkaConsumer(brokerAddress, "f586caf2-ffdc-4e3a-88b9-a262a502f8ac");
consumer.start();
boolean stopTest = consumer.isStopTest();
while (!stopTest) {
stopTest = consumer.isStopTest();
}
} catch (Exception e) {
logger.error("Errore nella configurazione della casella postale; {}", e.getMessage(), e);
}
}
}
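Two incidental notes on this test, independent of the missing messages: the stopTest flag is not volatile, so the spinning main thread is not guaranteed to ever observe the listener's update, and the busy-wait burns a CPU core. A sketch of the usual alternative, under the assumption that the latch is handed into the consumer's MessageListener (the wiring is illustrative, not in the original class):

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Sketch: let the MessageListener call latch.countDown() on the first record,
// then block the test thread with a bounded wait instead of spinning on a flag.
CountDownLatch latch = new CountDownLatch(1);
DynamicKafkaConsumer consumer = new DynamicKafkaConsumer("localhost:9092", "f586caf2-ffdc-4e3a-88b9-a262a502f8ac");
consumer.start();
boolean received = latch.await(30, TimeUnit.SECONDS); // fail the test if nothing arrives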
On the consumer side I can't read any message; note that the topic "f586caf2-ffdc-4e3a-88b9-a262a502f8ac" exists and it's the same topic used on the producer side.
When I send a message on the producer side I can see this log:
2020-02-19 22:00:22,320 52822 [kafka-producer-network-thread | producer-1] TRACE i.e.t.r.p.w.b.s.i.WebmailKafkaTopicSvcImpl - Inviato messaggio=[RicezioneMailMessage{pk='c5c8f8a4-8ddd-407a-9e51-f6b14d84f304', tipoMessaggio='mail'}] con offset=[0]
On the consumer side I don't see any message. I just see the following prints:
2020-02-19 22:00:03,194 1442 [main] INFO o.a.k.c.consumer.ConsumerConfig - ConsumerConfig values:
allow.auto.create.topics = false
auto.commit.interval.ms = 5000
auto.offset.reset = earliest
bootstrap.servers = [localhost:9092]
check.crcs = true
client.dns.lookup = default
client.id =
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = groupId
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.springframework.kafka.support.serializer.JsonDeserializer
2020-02-19 22:00:03,630 1878 [main] INFO o.a.k.common.utils.AppInfoParser - Kafka version: 2.4.0
2020-02-19 22:00:03,630 1878 [main] INFO o.a.k.common.utils.AppInfoParser - Kafka commitId: 77a89fcf8d7fa018
2020-02-19 22:00:03,630 1878 [main] INFO o.a.k.common.utils.AppInfoParser - Kafka startTimeMs: 1582146003626
2020-02-19 22:00:03,636 1884 [main] INFO o.a.k.c.consumer.KafkaConsumer - [Consumer clientId=consumer-groupId-1, groupId=groupId] Subscribed to topic(s): f586caf2-ffdc-4e3a-88b9-a262a502f8ac
2020-02-19 22:00:03,645 1893 [main] INFO o.s.s.c.ThreadPoolTaskScheduler - Initializing ExecutorService
2020-02-19 22:00:03,667 1915 [consumer-0-C-1] DEBUG o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - Commit list: {}
2020-02-19 22:00:04,123 2371 [consumer-0-C-1] INFO org.apache.kafka.clients.Metadata - [Consumer clientId=consumer-groupId-1, groupId=groupId] Cluster ID: hOOJH-WNTNiXD4il0Y7_0Q
2020-02-19 22:00:05,052 3300 [consumer-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - [Consumer clientId=consumer-groupId-1, groupId=groupId] Discovered group coordinator localhost:9092 (id: 2147483647 rack: null)
2020-02-19 22:00:05,059 3307 [consumer-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - [Consumer clientId=consumer-groupId-1, groupId=groupId] (Re-)joining group
2020-02-19 22:00:05,116 3364 [consumer-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - [Consumer clientId=consumer-groupId-1, groupId=groupId] (Re-)joining group
2020-02-19 22:00:05,154 3402 [consumer-0-C-1] INFO o.a.k.c.c.i.ConsumerCoordinator - [Consumer clientId=consumer-groupId-1, groupId=groupId] Finished assignment for group at generation 1: {consumer-groupId-1-41df9153-7c33-46b1-8274-2d7ee2bfb35c=org.apache.kafka.clients.consumer.ConsumerPartitionAssignor$Assignment@a95df1b}
2020-02-19 22:00:05,327 3575 [consumer-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - [Consumer clientId=consumer-groupId-1, groupId=groupId] Successfully joined group with generation 1
2020-02-19 22:00:05,335 3583 [consumer-0-C-1] INFO o.a.k.c.c.i.ConsumerCoordinator - [Consumer clientId=consumer-groupId-1, groupId=groupId] Adding newly assigned partitions: f586caf2-ffdc-4e3a-88b9-a262a502f8ac-0
2020-02-19 22:00:05,363 3611 [consumer-0-C-1] INFO o.a.k.c.c.i.ConsumerCoordinator - [Consumer clientId=consumer-groupId-1, groupId=groupId] Found no committed offset for partition f586caf2-ffdc-4e3a-88b9-a262a502f8ac-0
2020-02-19 22:00:05,401 3649 [consumer-0-C-1] INFO o.a.k.c.c.i.SubscriptionState - [Consumer clientId=consumer-groupId-1, groupId=groupId] Resetting offset for partition f586caf2-ffdc-4e3a-88b9-a262a502f8ac-0 to offset 0.
2020-02-19 22:00:05,404 3652 [consumer-0-C-1] DEBUG o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - Committing on assignment: {f586caf2-ffdc-4e3a-88b9-a262a502f8ac-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}}
2020-02-19 22:00:05,432 3680 [consumer-0-C-1] INFO o.s.k.l.ConcurrentMessageListenerContainer - groupId: partitions assigned: [f586caf2-ffdc-4e3a-88b9-a262a502f8ac-0]
2020-02-19 22:00:08,669 6917 [consumer-0-C-1] DEBUG o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - Received: 0 records
2020-02-19 22:00:08,670 6918 [consumer-0-C-1] DEBUG o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - Commit list: {}
2020-02-19 22:00:13,671 11919 [consumer-0-C-1] DEBUG o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - Received: 0 records
2020-02-19 22:00:13,671 11919 [consumer-0-C-1] DEBUG o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - Commit list: {}
2020-02-19 22:00:18,673 16921 [consumer-0-C-1] DEBUG o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - Received: 0 records
2020-02-19 22:00:18,673 16921 [consumer-0-C-1] DEBUG o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - Commit list: {}
2020-02-19 22:00:23,674 21922 [consumer-0-C-1] DEBUG o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - Received: 0 records
2020-02-19 22:00:23,674 21922 [consumer-0-C-1] DEBUG o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - Commit list: {}
2020-02-19 22:00:28,676 26924 [consumer-0-C-1] DEBUG o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - Received: 0 records
2020-02-19 22:00:28,676 26924 [consumer-0-C-1] DEBUG o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - Commit list: {}
2020-02-19 22:00:33,677 31925 [consumer-0-C-1] DEBUG o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - Received: 0 records
2020-02-19 22:00:33,677 31925 [consumer-0-C-1] DEBUG o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - Commit list: {}
2020-02-19 22:00:38,678 36926 [consumer-0-C-1] DEBUG o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - Received: 0 records
2020-02-19 22:00:38,678 36926 [consumer-0-C-1] DEBUG o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - Commit list: {}
2020-02-19 22:00:43,678 41926 [consumer-0-C-1] DEBUG o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - Received: 0 records
2020-02-19 22:00:43,679 41927 [consumer-0-C-1] DEBUG o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - Commit list: {}
Can anybody tell me where I'm wrong?
Thank you
Angelo
I found the root cause in your code.
Code to send the message, and the log from the producer side:
ricezioneMailMessageKafkaTemplate.send(rmm.getPk(), rmm);
Inviato messaggio=[RicezioneMailMessage{pk='c5c8f8a4-8ddd-407a-9e51-f6b14d84f304', tipoMessaggio='mail'}] con offset=[0]
Code and log from consumer:
DynamicKafkaConsumer consumer = new DynamicKafkaConsumer(brokerAddress, "f586caf2-ffdc-4e3a-88b9-a262a502f8ac");
2020-02-19 22:00:03,636 1884 [main] INFO o.a.k.c.consumer.KafkaConsumer - [Consumer clientId=consumer-groupId-1, groupId=groupId] Subscribed to topic(s): f586caf2-ffdc-4e3a-88b9-a262a502f8ac
You are sending to topic: c5c8f8a4-8ddd-407a-9e51-f6b14d84f304
You are listening on topic: f586caf2-ffdc-4e3a-88b9-a262a502f8ac
The producer and consumer are sending/listening on different topics.
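The fix follows directly: KafkaTemplate.send(String, V) treats its first argument as the topic, so send(rmm.getPk(), rmm) publishes to a topic named after the message key. A sketch of the intended call, with the topic passed in explicitly (the parameter name is illustrative):

// send(topic, key, value): publish to the agreed topic and keep pk as the record key.
public void sendMessage(String topicName, RicezioneMailMessage rmm) {
    ricezioneMailMessageKafkaTemplate.send(topicName, rmm.getPk(), rmm);
}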

My tcp client using spring integration not able to get response

I have created a TCP client using Spring Integration and I am able to receive a response for my sent message. But when I use LocalDateTime.now() to log the time, I am not able to receive the response to the sent message. I know this can be solved by a time setting to make the thread wait. As I am new to Spring Integration, kindly help me with how to do it.
@Configuration
@ComponentScan
@EnableAutoConfiguration
public class Test
{
protected final Log logger = LogFactory.getLog(this.getClass());
// **************** Client **********************************************
@Bean
public MessageChannel replyChannel()
{
return new DirectChannel();
}
@Bean
public MessageChannel sendChannel()
{
MessageChannel directChannel = new DirectChannel();
return directChannel;
}
@EnableIntegration
@IntegrationComponentScan
@Configuration
public static class config
{
@MessagingGateway(defaultRequestChannel = "sendChannel", defaultReplyChannel = "replyChannel")
public interface Gateway
{
String Send(String in);
}
}
@Bean
AbstractClientConnectionFactory tcpNetClientConnectionFactory()
{
AbstractClientConnectionFactory tcpNetClientConnectionFactory = new TcpNetClientConnectionFactory("localhost",
9999);
tcpNetClientConnectionFactory.setSerializer(new UCCXImprovedSerializer());
tcpNetClientConnectionFactory.setDeserializer(new UCCXImprovedSerializer());
tcpNetClientConnectionFactory.setSingleUse(true);
tcpNetClientConnectionFactory.setMapper(new TcpMessageMapper());
return tcpNetClientConnectionFactory;
}
@Bean
@ServiceActivator(inputChannel = "sendChannel")
TcpOutboundGateway tcpOutboundGateway()
{
TcpOutboundGateway tcpOutboundGateway = new TcpOutboundGateway();
tcpOutboundGateway.setConnectionFactory(tcpNetClientConnectionFactory());
tcpOutboundGateway.setReplyChannel(replyChannel());
return tcpOutboundGateway;
}
public static void main(String args[])
{
// new LegaServer();
ConfigurableApplicationContext applicationContext = SpringApplication.run(Test.class, args);
String temp = applicationContext.getBean(Gateway.class).Send("kksingh");
System.out.println(LocalDateTime.now()+"output" + temp);
applicationContext.stop();
}
}
My custom serializer and deserializer UCCXImprovedSerializer class, after updating as per @Garry:
public class UCCXImprovedSerializer implements Serializer<String>, Deserializer<String>
{
@Override
public String deserialize(InputStream initialStream) throws IOException
{
System.out.println("deserialzier called");
StringBuilder sb = new StringBuilder();
try (BufferedReader rdr = new BufferedReader(new InputStreamReader(initialStream)))
{
for (int c; (c = rdr.read()) != -1;)
{
sb.append((char) c);
}
}
return sb.toString();
}
@Override
public void serialize(String msg, OutputStream os) throws IOException
{
System.out.println(msg + "---serialize---" + Thread.currentThread().getName() + "");
os.write(msg.getBytes());
}
}
My server code at port 9999:
try
{
clientSocket = echoServer.accept();
System.out.println("client connection established..");
is = new DataInputStream(clientSocket.getInputStream());
os = new PrintStream(clientSocket.getOutputStream());
String tempString = "kksingh";
byte[] tempStringByte = tempString.getBytes();
byte[] temp = new byte[tempString.getBytes().length];
while (true)
{
is.read(temp);
System.out.println(new String(temp) + "--received msg is--- " + LocalDateTime.now());
System.out.println(LocalDateTime.now() + "sending value");
os.write(tempStringByte);
break;
}
} catch (IOException e)
{
System.out.println(e);
}
}
My log file for the TCP client:
2017-06-04 23:10:14.771 INFO 15568 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started org.springframework.integration.endpoint.EventDrivenConsumer@1f12e153
kksingh---serialize---main
pool-1-thread-1---deserialize----
pool-1-thread-1---deserialize----
pool-1-thread-1---deserialize----
pool-1-thread-1---deserialize----
2017-06-04 23:10:14.812 ERROR 15568 --- [pool-1-thread-1] o.s.i.ip.tcp.TcpOutboundGateway : Cannot correlate response - no pending reply for localhost:9999:57622:bc98ee29-8957-47bd-bd8a-f734c3ec3f9d
2017-06-04T23:10:14.809output
2017-06-04 23:10:14.821 INFO 15568 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Stopping beans in phase 0
My log file for the server side:
client connection established..
kksingh--received msg is--- 2017-06-04T23:10:14.899
2017-06-04T23:10:14.899sending value
When I removed the LocalDateTime.now() calls from the server and the TCP client, I am able to get the response as outputkksingh:
o.s.i.endpoint.EventDrivenConsumer : Adding {logging-channel-adapter:_org.springframework.integration.errorLogger} as a subscriber to the 'errorChannel' channel
2017-06-05 12:46:32.494 INFO 29076 --- [ main] o.s.i.channel.PublishSubscribeChannel : Channel 'application.errorChannel' has 1 subscriber(s).
2017-06-05 12:46:32.495 INFO 29076 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started _org.springframework.integration.errorLogger
2017-06-05 12:46:32.746 INFO 29076 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8080 (http)
2017-06-05 12:46:32.753 INFO 29076 --- [ main] o.s.i.samples.tcpclientserver.Test : Started Test in 2.422 seconds (JVM running for 2.716)
2017-06-05 12:46:32.761 INFO 29076 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {bridge:null} as a subscriber to the 'replyChannel' channel
2017-06-05 12:46:32.762 INFO 29076 --- [ main] o.s.integration.channel.DirectChannel : Channel 'application.replyChannel' has 1 subscriber(s).
2017-06-05 12:46:32.763 INFO 29076 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started org.springframework.integration.endpoint.EventDrivenConsumer@1f12e153
kksingh---serialize---main
pool-1-thread-1---deserialize----kksingh
outputkksingh
2017-06-05 12:46:32.837 INFO 29076 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Stopping beans in phase 0
2017-06-05 12:46:32.839 INFO 29076 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Removing {bridge:null} as a subscriber to the 'replyChannel' channel
2017-06-05 12:46:32.839 INFO 29076 --- [
Your deserializer is deserializing multiple packets...
pool-1-thread-1---deserialize----
pool-1-thread-1---deserialize----
pool-1-thread-1---deserialize----
pool-1-thread-1---deserialize----
Which produces 4 reply messages; the gateway can only handle one reply, which is why you see that ERROR message.
Your deserializer needs to be smarter than just capturing "available" bytes. You need something in the message to indicate the end of the data (or close the socket to indicate the end).
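A sketch of the usual fix, assuming the server can terminate each reply with CRLF: use the framework's delimiter-aware (de)serializers instead of a custom one that reads whatever bytes happen to be available.

import org.springframework.integration.ip.tcp.connection.AbstractClientConnectionFactory;
import org.springframework.integration.ip.tcp.connection.TcpNetClientConnectionFactory;
import org.springframework.integration.ip.tcp.serializer.ByteArrayCrLfSerializer;

// ByteArrayCrLfSerializer frames each message with \r\n, so the deserializer
// knows exactly where one reply ends instead of producing a reply per packet.
AbstractClientConnectionFactory factory = new TcpNetClientConnectionFactory("localhost", 9999);
ByteArrayCrLfSerializer crlf = new ByteArrayCrLfSerializer();
factory.setSerializer(crlf);
factory.setDeserializer(crlf);

The server would then also need to terminate each response with \r\n (for example, os.write("\r\n".getBytes()) after writing the payload) so the client can detect the frame boundary.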

Spring Cloud: Eureka Client registration/deregistration cycle

To familiarize myself with Spring Cloud's Eureka client/server mechanism, I am trying to connect a client to the Eureka server and toggle the connection on/off every 5 minutes to see how the Eureka server handles this.
I have two Eureka clients.
The first one gives me information about the registered applications with this code:
@Autowired
private DiscoveryClient discoveryClient;
@RequestMapping(value = "/services", produces = MediaType.APPLICATION_JSON_VALUE)
public ResponseEntity<ResourceSupport> applications() {
ResourceSupport resource = new ResourceSupport();
Set<String> regions = discoveryClient.getAllKnownRegions();
for (String region : regions) {
Applications allApps = discoveryClient.getApplicationsForARegion(region);
List<Application> registeredApps = allApps.getRegisteredApplications();
Iterator<Application> it = registeredApps.iterator();
while (it.hasNext()) {
Application app = it.next();
List<InstanceInfo> instancesInfos = app.getInstances();
if (instancesInfos != null && !instancesInfos.isEmpty()) {
//only show one of the instances
InstanceInfo info = instancesInfos.get(0);
resource.add(new Link(info.getHomePageUrl(), "urls"));
}
}
}
return new ResponseEntity<ResourceSupport>(resource, HttpStatus.OK);
}
The second Eureka client registers/deregisters itself every 5 minutes:
private static final long EUREKA_INTERVAL = 5 * 60000;
public static void main(String[] args) {
ConfigurableApplicationContext context = SpringApplication.run(MyServiceApplication.class);
long currentTime = System.currentTimeMillis();
long lastToggleTime = System.currentTimeMillis();
boolean connected = true;
while (true) {
if (currentTime - lastToggleTime > EUREKA_INTERVAL) {
if (connected) {
System.err.println("disconnect");
DiscoveryManager.getInstance().shutdownComponent();
connected = false;
lastToggleTime = System.currentTimeMillis();
}
else {
System.err.println("connect");
DiscoveryManager.getInstance().initComponent(
DiscoveryManager.getInstance().getEurekaInstanceConfig(),
DiscoveryManager.getInstance().getEurekaClientConfig());
connected = true;
lastToggleTime = System.currentTimeMillis();
}
}
currentTime = System.currentTimeMillis();
}
}
The log output of the second Eureka client looks as follows:
disconnect
2015-03-26 13:59:23.713 INFO 3452 --- [ main] com.netflix.discovery.DiscoveryClient : DiscoveryClient_MYAPPNAME/MYMEGAHOSTNAME - deregister status: 200
connect
2015-03-26 14:04:23.870 INFO 3452 --- [ main] com.netflix.discovery.DiscoveryClient : Disable delta property : false
2015-03-26 14:04:23.870 INFO 3452 --- [ main] com.netflix.discovery.DiscoveryClient : Single vip registry refresh property : null
2015-03-26 14:04:23.870 INFO 3452 --- [ main] com.netflix.discovery.DiscoveryClient : Force full registry fetch : false
2015-03-26 14:04:23.870 INFO 3452 --- [ main] com.netflix.discovery.DiscoveryClient : Application is null : false
2015-03-26 14:04:23.870 INFO 3452 --- [ main] com.netflix.discovery.DiscoveryClient : Registered Applications size is zero : true
2015-03-26 14:04:23.870 INFO 3452 --- [ main] com.netflix.discovery.DiscoveryClient : Application version is -1: true
2015-03-26 14:04:23.889 INFO 3452 --- [ main] com.netflix.discovery.DiscoveryClient : Getting all instance registry info from the eureka server
2015-03-26 14:04:23.892 INFO 3452 --- [ main] com.netflix.discovery.DiscoveryClient : The response status is 200
2015-03-26 14:04:23.894 INFO 3452 --- [ main] com.netflix.discovery.DiscoveryClient : Starting heartbeat executor: renew interval is: 30
2015-03-26 14:04:53.916 INFO 3452 --- [ool-11-thread-1] com.netflix.discovery.DiscoveryClient : DiscoveryClient_MYAPPNAME/MYMEGAHOSTNAME - Re-registering apps/MYAPPNAME
2015-03-26 14:04:53.916 INFO 3452 --- [ool-11-thread-1] com.netflix.discovery.DiscoveryClient : DiscoveryClient_MYAPPNAME/MYMEGAHOSTNAME: registering service...
2015-03-26 14:04:53.946 INFO 3452 --- [ool-11-thread-1] com.netflix.discovery.DiscoveryClient : DiscoveryClient_MYAPPNAME/MYMEGAHOSTNAME - registration status: 204
When starting both Eureka clients for the first time, this works well. The second client is shown by the first client AND the second client is visible in the Eureka server console.
When the second client disconnects itself from the Eureka server, it is no longer listed there and the first client is also not showing it anymore.
Unfortunately, when the second client reconnects to the Eureka server, the Eureka server console is just showing a big red highlighted "DOWN (1)" and the first client is not showing the second client anymore. What am I missing here?
Solution:
Based on Dave Syer's answer, my solution was to add a custom @Configuration that has the EurekaDiscoveryClientConfiguration autowired and starts a thread for toggling the registration. Note that this is for test purposes only, so it may be a quite ugly solution ;-)
@Configuration
static public class MyDiscoveryClientConfigServiceAutoConfiguration {
@Autowired
private EurekaDiscoveryClientConfiguration lifecycle;
@PostConstruct
public void init() {
new Thread(new Runnable() {
@Override
public void run() {
long currentTime = System.currentTimeMillis();
long lastToggleTime = System.currentTimeMillis();
boolean connected = true;
while (true) {
if (currentTime - lastToggleTime > EUREKA_INTERVAL) {
if (connected) {
System.err.println("disconnect");
lifecycle.stop();
DiscoveryManager.getInstance().getDiscoveryClient().shutdown();
connected = false;
lastToggleTime = System.currentTimeMillis();
}
else {
System.err.println("connect");
DiscoveryManager.getInstance().initComponent(
DiscoveryManager.getInstance().getEurekaInstanceConfig(),
DiscoveryManager.getInstance().getEurekaClientConfig());
lifecycle.start();
connected = true;
lastToggleTime = System.currentTimeMillis();
}
}
currentTime = System.currentTimeMillis();
}
}
}).start();
}
}
Your call to DiscoveryManager.getInstance().initComponent() does not set the status (and the default is DOWN). In Spring Cloud we handle it in a special EurekaDiscoveryClientConfiguration.start() lifecycle. You could inject that and re-use it like this:
@Autowired
private EurekaDiscoveryClientConfiguration lifecycle;
@PostConstruct
public void init() {
this.lifecycle.stop();
if (DiscoveryManager.getInstance().getDiscoveryClient() != null) {
DiscoveryManager.getInstance().getDiscoveryClient().shutdown();
}
ApplicationInfoManager.getInstance().initComponent(this.instanceConfig);
DiscoveryManager.getInstance().initComponent(this.instanceConfig,
this.clientConfig);
this.lifecycle.start();
}
(which is code taken from here: https://github.com/spring-cloud/spring-cloud-netflix/blob/master/spring-cloud-netflix-core/src/main/java/org/springframework/cloud/netflix/config/DiscoveryClientConfigServiceAutoConfiguration.java#L58).
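The underlying reason the instance shows as DOWN after re-initialization is that a freshly initialized client keeps its initial status until something marks it UP, which is what the Spring Cloud lifecycle above normally does. For experiments like this, a sketch of doing it by hand with the raw Netflix API after initComponent():

import com.netflix.appinfo.ApplicationInfoManager;
import com.netflix.appinfo.InstanceInfo;

// After DiscoveryManager.getInstance().initComponent(...), mark the instance UP
// so the registry (and other clients) see it as available again.
ApplicationInfoManager.getInstance().setInstanceStatus(InstanceInfo.InstanceStatus.UP);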
