Spring Cloud Sleuth starts a new trace instead of continuing spans in a single trace - spring-boot

I have 4 Spring Boot applications (A, B, C and D).
The lifecycle of a transaction is as follows:
Application A is a Kafka Streams application and it ultimately produces to a topic which is consumed by Application B.
Application B then consumes from the topic using @KafkaListener, does some processing and then produces to an IBM MQ queue using Spring's JmsTemplate.
Application C, which is a @JmsListener, consumes from the above queue and produces to another queue using Spring's JmsTemplate.
Application D, which is again a @JmsListener, consumes from the above queue and then produces to a Kafka topic, which is in turn consumed by Application A.
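To illustrate, Application B is essentially a bridge like the following (a rough sketch; the topic and queue names are made up here):

import org.springframework.jms.core.JmsTemplate;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

// Sketch of Application B: consumes from Kafka and forwards to IBM MQ.
// With Sleuth on the classpath, the @KafkaListener should continue the
// trace from the incoming record, and the instrumented JmsTemplate should
// propagate it onto the outgoing JMS message.
@Component
public class KafkaToMqBridge {

    private final JmsTemplate jmsTemplate;

    public KafkaToMqBridge(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    @KafkaListener(topics = "topic-from-a")
    public void onMessage(String payload) {
        // ... some processing ...
        jmsTemplate.convertAndSend("DESTINATION_QUEUE", payload);
    }
}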
Now for a single transaction I would expect a single trace across all four applications, but instead I get:
one trace starting from Application A and ending at Application B (where it produces to IBM MQ), and
one trace starting from Application C and ending at Application A.
I would have uploaded pictures showing the Zipkin spans, but for some reason I am not able to do so.
All of the above are Spring Boot applications and they use spring-cloud-sleuth to produce transaction traces. I am relying on Spring Boot's autoconfiguration, and these are the properties I have set in all the applications:
spring:
  zipkin:
    enabled: ${ZIPKIN_ENABLED:false}
    sender:
      type: kafka
    baseUrl: ${ZIPKIN_URL:http://localhost:9411}
    service:
      name: ${spring.application.name}
  sleuth:
    messaging:
      kafka:
        enabled: true
      jms:
        enabled: true
I am not able to understand what exactly is happening here. Why are the spans scattered across two traces and not one?
I am using Spring Boot 2.3.3 and spring-cloud-dependencies Hoxton.SR8.

So it was Application B which was not passing the header along. It turns out that the queue URI had a property, targetClient, which was set to 1. The URI is something like
queue:///DESTINATION_QUEUE?targetClient=1
Now I am not an IBM MQ expert by far, but the documentation states that setting this property to 1 means that messages do not contain an MQRFH2 header. Since Sleuth propagates its trace context as JMS message properties, and IBM MQ carries JMS properties in the MQRFH2 header, stripping that header drops the trace context. I toggled the property to 0 and voilà, all spans fall into place.
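If you build the destination programmatically instead of via the URI, the same fix looks roughly like this (a sketch assuming the IBM MQ JMS classes; queue name and payload are placeholders):

import com.ibm.mq.jms.MQQueue;
import com.ibm.msg.client.wmq.WMQConstants;
import org.springframework.jms.core.JmsTemplate;

public class MqSender {

    private final JmsTemplate jmsTemplate;

    public MqSender(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    public void send(String payload) throws Exception {
        MQQueue queue = new MQQueue("DESTINATION_QUEUE");
        // WMQ_CLIENT_JMS_COMPLIANT (0) keeps the MQRFH2 header, so JMS
        // properties -- including Sleuth's trace headers -- survive the hop.
        queue.setTargetClient(WMQConstants.WMQ_CLIENT_JMS_COMPLIANT);
        jmsTemplate.convertAndSend(queue, payload);
    }
}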

Related

Liveness/Readiness set of health indicators for Spring Boot service running on top of Kafka Streams

How should health indicators be properly configured for a Spring Boot service running on top of Kafka Streams with a DB connection? We use Spring Cloud Stream with the Kafka Streams binder, Spring Data JPA, and Kubernetes as the container orchestrator. We have, say, 3 service replicas and 9 partitions for each topic. A typical service joins messages from two topics, persists data in a database and publishes data back to another Kafka topic.
After switching to Spring Boot 2.3.1 and changing K8s liveness/readiness endpoints to the new ones:
/actuator/health/liveness
/actuator/health/readiness
we discovered that by default they do not have any health indicators included.
According to the documentation:
Actuator configures the "liveness" and "readiness" probes as Health Groups; this means that all the Health Groups features are available for them. (...) By default, Spring Boot does not add other Health Indicators to these groups.
I believe that this is the right approach, but I have not tested it:
management.endpoint.health.group.readiness.include: readinessState,db,binders
management.endpoint.health.group.liveness.include: livenessState,ping,diskSpace
We try to cover the following use cases:
rolling update: no available consumption slot (an idle instance) when a new replica is added
the stream has died (a runtime exception has been thrown)
the DB is not available during container startup / while the service is running
the broker is not available
I have found a similar question; however, I believe the current one is specifically related to Kafka services, which are different in nature from REST services.
Update:
In Spring Boot 2.3.1 the binders health indicator checks whether streams are in the RUNNING or REBALANCING state for Kafka 2.5 (before, only RUNNING), so I guess the rolling-update case with an idle instance is handled by its logic.
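For reference, that state check boils down to something like the following hedged sketch of an equivalent custom indicator (the class and its registration are hypothetical; the binder ships its own version):

import org.apache.kafka.streams.KafkaStreams;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;

// Reports UP only while the Streams instance is RUNNING or REBALANCING,
// so a stream that died from an uncaught exception flips readiness to DOWN.
public class KafkaStreamsHealthIndicator implements HealthIndicator {

    private final KafkaStreams streams;

    public KafkaStreamsHealthIndicator(KafkaStreams streams) {
        this.streams = streams;
    }

    @Override
    public Health health() {
        KafkaStreams.State state = streams.state();
        boolean healthy = state == KafkaStreams.State.RUNNING
                || state == KafkaStreams.State.REBALANCING;
        return (healthy ? Health.up() : Health.down())
                .withDetail("state", state.name())
                .build();
    }
}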

How to do replication with Spring Boot and ActiveMQ Artemis?

I am looking for a structure or solution that can support Spring Boot microservices with multiple instances, ActiveMQ Artemis and Apache Camel.
For example:
I have an ActiveMQ Artemis instance and a Spring Boot JMS consumer with instance A (on machine A) and instance B (on machine B).
Both instances (A, B) are up, but by default instance A is the master consumer: it must consume the JMS messages, and only if it goes down or throws an exception should instance B start consuming; when A is OK again, it takes the ball back.
NB: instances A and B of the Spring Boot microservice are on different machines, and in my case I don't have any container runtime like Docker.
Do you have any approach to solve this issue?
I think the closest you could get to the functionality you want is by using the "exclusive queue" feature. Both consumers A & B can be active at the same time, but the broker will only send messages to one of them. If the consumer which the broker has chosen goes away for whatever reason then the broker will choose another consumer.
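If the broker is embedded in one of the Spring Boot apps, enabling this per address could look roughly like the following sketch (the JOBS.# match is a placeholder; on a standalone broker the equivalent is the exclusive-queue address setting in broker.xml):

import org.apache.activemq.artemis.core.settings.impl.AddressSettings;
import org.springframework.boot.autoconfigure.jms.artemis.ArtemisConfigurationCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ExclusiveQueueConfig {

    // Makes queues on matching addresses exclusive: the broker dispatches
    // to a single consumer and fails over to another one if it goes away.
    @Bean
    public ArtemisConfigurationCustomizer exclusiveQueues() {
        return configuration -> configuration.addAddressesSetting(
                "JOBS.#", new AddressSettings().setDefaultExclusiveQueue(true));
    }
}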

Spring Boot and Kafka: Broker disconnected

I have set up a Spring Boot application to receive Kafka messages from an existing and working Kafka producer. The setup is standard, and based on the following: https://www.codenotfound.com/spring-kafka-consumer-producer-example.html
Messages are not received, and the following is continually displayed in the console:
WARN org.apache.kafka.clients.NetworkClient: Bootstrap broker <hostname>:9092 disconnected
In addition, the following debug message is logged:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
The console message is discussed in the following link:
https://community.hortonworks.com/content/supportkb/150148/errorwarn-bootstrap-broker-6668-disconnected-orgap.html
The logged message is discussed here:
https://community.cloudera.com/t5/Data-Ingestion-Integration/Error-when-sending-message-to-topic-in-Kafka/td-p/41440
Very likely, the timeout will not happen once the first issue is resolved.
The suggested solution to the console message is to explicitly pass --security-protocol SSL as an argument to the producer or consumer command.
Given that I am listening on an existing Kafka broker and topic, no settings can be changed there. Any changes must be on the Spring Boot side.
Is it possible to configure application.yml so that --security-protocol SSL is passed as an argument to the consumer? Also, has anyone experienced this before, and is there another way to resolve the issue using the configuration options available in Spring Boot and Spring Kafka?
Thanks
See the documentation.
Scroll down to Kafka. Arbitrary Kafka properties can be set using
spring:
  kafka:
    properties:
      security.protocol: SSL
This applies to consumers and producers (and admins in 2.0).
In the upcoming 2.0 release (currently RC1), there is also
spring:
  kafka:
    consumer:
      properties:
        some.property: foo
for properties that only apply to consumers (and similarly for producers and admins).

Spring Boot Micro Service Tracing Options

I have the below requirements; is there any open source library that will cover all of them?
1. We are building a distributed microservice architecture with Spring Boot, which includes more than 100 microservices.
2. There is a lot of inter-microservice communication involved in achieving a single transaction.
3. We want to trace every microservice call, and the trace should provide the following information:
a. Transaction ID/Trace ID
b. Back-end transaction status: HTTP status for REST, and likewise for SOAP.
c. Time taken for that call.
d. Request and response payload.
Currently we are achieving this using a home-grown tracing framework. Is there any open source project that handles all of this without any coding from the developer? I know we have a few options with Spring Boot, such as Spring Cloud Zipkin and Sleuth; do these handle the above requirements?
My project has similar requirements to yours. IMHO, spring-cloud-sleuth + Zipkin work well in my case.
For all inter-microservice communication we use Kafka, and spring-cloud-sleuth + Zipkin have no problem tracing every call, from REST -> Kafka -> more Kafka -> REST.
To enable Kafka tracing, simply add:
spring:
  sleuth:
    propagation-keys: some-key
    sampler:
      probability: 1
    messaging:
      kafka:
        enabled: true
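The propagation-keys entry whitelists a custom field that rides along with the trace context; with the Brave tracer you can then set and read it in code. A hedged sketch (the service and field value are made up; the field name mirrors the config above):

import brave.propagation.ExtraFieldPropagation;
import org.springframework.stereotype.Service;

@Service
public class OrderService {

    public void process(String orderId) {
        // Attach the whitelisted "some-key" field to the current trace;
        // Sleuth propagates it across REST and Kafka hops automatically.
        ExtraFieldPropagation.set("some-key", orderId);
        // Downstream services can read it back:
        String value = ExtraFieldPropagation.get("some-key");
    }
}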
We are also using Azure Application Insights for centralized logging, which is well integrated with Spring Cloud.
Hope the above gives you some confidence in using Sleuth + Zipkin.

Does Spring XD re-process the same message when one of its containers goes down while processing the message?

Application data flow:
JSON messages --> ActiveMQ --> Spring XD -- business logic (transform JSON to Java object) --> save data to target DB --> DB.
Question:
Spring XD is running in cluster mode, configured with Redis.
Spring XD picks up the message from the ActiveMQ queue (AMQ), so the message is no longer in AMQ. Now suppose that one of the containers where this message is being processed with some business logic suddenly goes down. In this scenario:
Will the Spring XD framework automatically re-process that particular message? What's the mechanism behind that?
Thanks,
Abhi
Not with a Redis transport; Redis has no infrastructure to support such a requirement ("transactional" reads). You would need to use a RabbitMQ or Kafka transport.
EDIT:
See Application Configuration (scroll down to RabbitMQ) and Message Bus Configuration.
Specifically, the default ackMode is AUTO, which means messages are acknowledged only on success.
