Spring Cloud Sleuth: different trace IDs when integrating with Kafka

I'm using Kafka for async calls between microservices, and Spring Sleuth for logging. The logging works, but when a message goes from Microservice1 to Microservice2, the log messages have different trace IDs. Shouldn't they share the same trace ID but have different span IDs? Is there any special configuration?

Message headers are not transported by the Spring Cloud Stream Kafka binder by default; you have to list them manually via spring.cloud.stream.kafka.binder.headers, as described in the Spring Cloud Stream Reference Guide. Then check whether the tracing-related headers are actually being sent.
You can whitelist the Zipkin headers as follows in your application.yml:
spring:
  cloud:
    stream:
      kafka:
        binder:
          headers:
            - X-B3-TraceId
            - X-B3-SpanId
            - X-B3-Sampled
            - X-B3-ParentSpanId
            - X-Span-Name
            - X-Span-Export
Or in your application.properties:
spring.cloud.stream.kafka.binder.headers[0]=X-B3-TraceId
spring.cloud.stream.kafka.binder.headers[1]=X-B3-SpanId
spring.cloud.stream.kafka.binder.headers[2]=X-B3-Sampled
spring.cloud.stream.kafka.binder.headers[3]=X-B3-ParentSpanId
spring.cloud.stream.kafka.binder.headers[4]=X-Span-Name
spring.cloud.stream.kafka.binder.headers[5]=X-Span-Export
Or in a comma-separated list:
spring.cloud.stream.kafka.binder.headers=X-B3-TraceId,X-B3-SpanId,X-B3-Sampled,\
X-B3-ParentSpanId,X-Span-Name,X-Span-Export
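Once the headers are whitelisted, a quick way to confirm propagation is to log on both sides and compare the trace ID that Sleuth prints in its default log pattern. A minimal sketch, assuming a classic @EnableBinding-style processor (class and binding names here are illustrative, not from the question):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Processor;

// Sketch only: with the X-B3-* headers whitelisted on both services, the
// traceId segment that Sleuth adds to each log line should match the
// producer's, while the spanId differs.
@EnableBinding(Processor.class)
public class TraceCheck {

    private static final Logger log = LoggerFactory.getLogger(TraceCheck.class);

    @StreamListener(Processor.INPUT)
    public void onMessage(String payload) {
        // Compare this line's [app,traceId,spanId,exportable] block with the
        // corresponding log line in Microservice1.
        log.info("received: {}", payload);
    }
}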

Related

Spring Cloud Sleuth starts a new trace instead of continuing spans in a single trace

I have 4 Spring Boot applications (A, B, C and D).
The lifecycle of a transaction is as follows:
Application A is a Kafka Streams application and it ultimately produces to a topic which is consumed by Application B.
Application B then consumes from the topic using @KafkaListener, does some processing, and produces to an IBM MQ queue using Spring's JmsTemplate.
Application C, which is a @JmsListener, consumes from the above queue and produces to another queue using Spring's JmsTemplate.
Application D, which is again a @JmsListener, consumes from that queue and then produces to a Kafka topic, which is in turn consumed by Application A.
Now, for a single transaction, I would expect a single trace across all four applications, but instead I get:
One trace starting from Application A and ending at Application B (where it produces to IBM MQ)
One trace starting from Application C and ending at Application A
I would have uploaded pictures to show the Zipkin spans, but for some reason I am not able to do so.
All the above applications are Spring Boot applications and they use spring-cloud-sleuth to produce transaction traces. I am relying on Spring Boot's autoconfiguration, and these are the properties I have set in all the applications:
zipkin:
  enabled: ${ZIPKIN_ENABLED:false}
  sender:
    type: kafka
  baseUrl: ${ZIPKIN_URL:http://localhost:9411}
  service:
    name: ${spring.application.name}
sleuth:
  messaging:
    kafka:
      enabled: true
    jms:
      enabled: true
I am not able to understand what exactly is happening here. Why are the spans scattered across two traces instead of one?
I am using spring-boot 2.3.3 and spring-cloud-dependencies Hoxton.SR8.
So it was Application B that was not passing the headers along. It turns out that the queue URI had a property targetClient which was set to 1. The URI is something like
queue:///DESTINATION_QUEUE?targetClient=1
Now I am not an IBM MQ expert by far, but the documentation states that setting this property to 1 means that messages do not contain an MQRFH2 header. I toggled it to 0 and voilà, all spans fall into place.
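For reference, a minimal sketch of the fix on the sending side (the bean setup is illustrative, not from the answer): keeping targetClient=0 in the destination URI preserves the MQRFH2 header, which is where the B3 trace fields travel.

import javax.jms.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.core.JmsTemplate;

@Configuration
public class MqConfig {

    // targetClient=0 ("JMS compliant" mode) keeps the MQRFH2 header on
    // outgoing messages, so Sleuth's B3 headers survive the hop. With the
    // IBM MQ JMS client, the queue:// URI form is resolved by createQueue.
    @Bean
    public JmsTemplate jmsTemplate(ConnectionFactory connectionFactory) {
        JmsTemplate template = new JmsTemplate(connectionFactory);
        template.setDefaultDestinationName("queue:///DESTINATION_QUEUE?targetClient=0");
        return template;
    }
}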

Spring Cloud Dataflow - Stream deployment stuck in "Deploying"

My custom stream is stuck in "Deploying", but the stream is actually working: messages are received by the sink. Still, the status in SCDF remains "Deploying".
Per the question "Spring Cloud Dataflow Custom App stuck in Deploying state", @Sabby Anandan said that SCDF checks /health and /info. But per the post "Spring boot actuator /health is not working", the URL should be /actuator/health. This is consistent with my code as well: http://localhost:1234/health does not work, but http://localhost:1234/actuator/health gives me {"status":"UP"}.
Is this a bug in SCDF? Should it check /actuator/health instead?
Can you please help? If this is a bug, is there a workaround?
Below are the version details:
- SCDF: spring-cloud-dataflow-server-2.5.1.BUILD-20200518.143034-16
- Skipper: spring-cloud-skipper-server-2.4.1.BUILD-20200518.094106-12
- Boot: 2.3.0
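One possible workaround, assuming the platform really is probing /health at the root: move the actuator endpoints back to the root path in the deployed app's configuration. This is standard Spring Boot 2.x configuration, not an SCDF-specific switch, so treat it as a sketch rather than the official fix:

management:
  endpoints:
    web:
      base-path: /          # serve /health and /info at the root again
      exposure:
        include: health,info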

Spring Cloud Stream Kafka Binder and Spring Cloud Azure EventHub compatible version for Spring Boot >1.5.20

I have successfully used Spring Cloud Stream Kafka Binder (org.springframework.cloud:spring-cloud-starter-stream-kafka:3.0.1.RELEASE) and Spring Cloud Azure Event Hubs (com.microsoft.azure:spring-cloud-starter-azure-eventhubs:1.2.3) with Spring Boot 2.2.6 to publish and consume messages from Azure Event Hub (with the Kafka API enabled).
However, when I try to integrate the same versions of the Spring Cloud libraries with Spring Boot 1.5.22, I get java.lang.NoClassDefFoundError: org/springframework/integration/support/converter/ConfigurableCompositeMessageConverter.
When I used spring-cloud-starter-stream-kafka:1.3.4.RELEASE and com.microsoft.azure:spring-cloud-starter-azure-eventhubs:1.1.0, I got ZooKeeper connection issues, probably because a different set of configuration properties is needed:
2020-04-29 17:01:43.104 INFO 81976 --- [localhost:2181)] [org.apache.zookeeper.ClientCnxn ] [-] [-] : Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2020-04-29 17:01:43.107 WARN 81976 --- [localhost:2181)] [org.apache.zookeeper.ClientCnxn ] [-] [-] : Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
My application.yaml looks like this:
spring:
  cloud:
    azure:
      auto-create-resources: true
      credential-file-path: my.azureauth
      eventhub:
        checkpoint-storage-account: azuremigrationv2
        namespace: somenamespace
        region: Central US
        resource-group: some-rg
    stream:
      bindings:
        consumer:
          destination: event-hub-1
          group: testconsumergroup
          content-type: application/json
          nativeEncoding: true
          consumer:
            concurrency: 1
            valueSerde: JsonSerde
            requeue-rejected: true
I would like to know which versions of these libraries are compatible with Spring Boot >= 1.5.20.
My understanding is that the config properties under spring.cloud.azure are for the dependency azure-spring-cloud-starter-eventhubs, while the config properties under spring.cloud.stream are for azure-spring-cloud-stream-binder-eventhubs. I could be wrong about this, but I think it is correct. There is no good documentation on it that I could find, and it makes things quite confusing if you don't know the difference. It caused me to waste a day or two doing POCs until I started understanding it.
NOTE: When configuring binders explicitly, spring.cloud.stream contains spring.cloud.azure subkeys.
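To illustrate that note, here is a hedged sketch of what such nesting can look like when a binder is declared explicitly, using the generic Spring Cloud Stream per-binder environment mechanism (the binder name is illustrative, and the binder type string may differ by library version):

spring:
  cloud:
    stream:
      binders:
        eventhub-1:                 # illustrative binder name
          type: eventhub            # type name depends on the binder version
          environment:              # per-binder environment: note the nested
            spring:                 # spring.cloud.azure subkeys
              cloud:
                azure:
                  eventhub:
                    namespace: somenamespace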

Spring Boot and Kafka: Broker disconnected

I have set up a Spring Boot application to receive Kafka messages from an existing, working Kafka producer. The setup is standard, and based on the following: https://www.codenotfound.com/spring-kafka-consumer-producer-example.html
Messages are not received, and the following is continually displayed in the console:
WARN org.apache.kafka.clients.NetworkClient : Bootstrap broker <hostname>:9092 disconnected
In addition, the following debug message is logged:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
The console message is discussed in the following link:
https://community.hortonworks.com/content/supportkb/150148/errorwarn-bootstrap-broker-6668-disconnected-orgap.html
The logged message is discussed here:
https://community.cloudera.com/t5/Data-Ingestion-Integration/Error-when-sending-message-to-topic-in-Kafka/td-p/41440
Very likely, the timeout will not happen when the first issue is resolved.
The solution given for the console message is to explicitly pass --security-protocol SSL as an argument to the producer or consumer command.
Given that I am listening on an existing Kafka broker and topic, no settings can be changed there; any changes must be on the Spring Boot side.
Is it possible to configure application.yml so that --security-protocol SSL is passed as an argument to the consumer? Also, has anyone experienced this before, and is there another way to resolve the issue using the configuration options available in Spring Boot and Spring Kafka?
Thanks
See the documentation; scroll down to the Kafka section. Arbitrary Kafka properties can be set using:
spring:
  kafka:
    properties:
      security.protocol: SSL
This applies to the consumer and producer (and the admin in 2.0).
In the upcoming 2.0 release (currently RC1), there is also:
spring:
  kafka:
    consumer:
      properties:
        some.property: foo
for properties that apply only to consumers (and similarly for producers and admins).
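For an SSL-secured broker you will usually need a truststore as well. A hedged sketch using the standard Spring Boot spring.kafka.ssl.* properties (the path and password are placeholders, not values from the question):

spring:
  kafka:
    bootstrap-servers: <hostname>:9092
    properties:
      security.protocol: SSL
    ssl:
      trust-store-location: file:/path/to/kafka.client.truststore.jks   # placeholder
      trust-store-password: changeit                                     # placeholder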

How to connect to Kafka Mesos Framework from an application using Spring Cloud Stream?

With a Mesos/Marathon cluster in place and a Spring Boot application using Spring Cloud Stream that consumes a topic from Kafka, we now want to integrate Kafka with the Mesos cluster. For this we want to install the Kafka Mesos Framework.
Right now we have the application.yml configuration like this:
---
spring:
  profiles: local-docker
  cloud:
    stream:
      kafka:
        binder:
          zk-nodes: 192.168.88.188
          brokers: 192.168.88.188
....
Once we have installed the Kafka Mesos Framework, how can we connect to Kafka from Spring Cloud Stream? Or, more specifically, what will the configuration look like?
The configuration properties look good. Do you have the correct host addresses?
For more info on the Kafka binder config properties, you can refer here:
https://github.com/spring-cloud/spring-cloud-stream/blob/master/spring-cloud-stream-docs/src/main/asciidoc/spring-cloud-stream-overview.adoc#kafka-specific-settings
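Assuming the framework's brokers end up addressable under Mesos DNS names (the hostnames below are hypothetical and depend entirely on your cluster's DNS setup), the binder config would simply point at those endpoints:

spring:
  cloud:
    stream:
      kafka:
        binder:
          # hypothetical Mesos-DNS names; replace with the addresses your
          # Kafka Mesos Framework actually registers for its brokers
          brokers: broker-0.kafka.mesos:9092,broker-1.kafka.mesos:9092
          zk-nodes: master.mesos:2181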
