I am having a hard time finding a solution for the following problem:
Let's say I have 4 microservices (A, B, C, D) that interact with each other over REST APIs.
A calls B and B calls D, so the path for a single request is A/B/D.
Below is my logging pattern:
logging.pattern.console=%d{yyyy-MM-dd HH:mm:ss.SSS} ${LOG_LEVEL_PATTERN:-%5p} ${PID:- } [%15.15t] %logger{10}:%L | %m%n
I want to add the call path to it.
So let's say a request is initiated from A to B; in the logs of B I want it to display Path:A/
Now B calls D, so the path in the logs of D should be A/B.
Please suggest how I can manage this.
Sorry for the naïve question; I am new to SLF4J.
As I understand it, you want to know the path your request took. For that you can use Spring Cloud Sleuth, which provides Spring Boot auto-configuration for distributed tracing:
https://developer.okta.com/blog/2021/07/26/spring-cloud-sleuth
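Sleuth puts the trace and span IDs into the logging MDC, which your pattern can reference with %X. If you specifically want the literal service path (A/, A/B/) rather than trace IDs, here is a minimal hand-rolled sketch using SLF4J's MDC; the header name X-Call-Path and the filter/interceptor wiring are my own illustrative choices, not a standard mechanism:

// Server side: stash the incoming call path into the SLF4J MDC.
import java.io.IOException;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.slf4j.MDC;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

@Component
public class CallPathFilter extends OncePerRequestFilter {
    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
            FilterChain chain) throws ServletException, IOException {
        // e.g. "A/" when A calls B, "A/B/" when B calls D
        String path = request.getHeader("X-Call-Path");
        MDC.put("callPath", path == null ? "" : path);
        try {
            chain.doFilter(request, response);
        } finally {
            MDC.remove("callPath"); // avoid leaking into other requests on this thread
        }
    }
}

On the way out, each service appends its own name before calling the next hop (serviceName here stands for the current service's name, e.g. "B"):

// Client side: RestTemplate interceptor that propagates and extends the path.
restTemplate.getInterceptors().add((req, body, execution) -> {
    String soFar = MDC.get("callPath");
    req.getHeaders().add("X-Call-Path", (soFar == null ? "" : soFar) + serviceName + "/");
    return execution.execute(req, body);
});

Then the pattern can pick the value up from the MDC:

logging.pattern.console=%d{yyyy-MM-dd HH:mm:ss.SSS} ${LOG_LEVEL_PATTERN:-%5p} ${PID:- } [%15.15t] Path:%X{callPath} %logger{10}:%L | %m%n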
Here's my use case:
I have 1 DB and 1 set of tables for Quartz.
I have a Spring Boot application X, which has N nodes (VMs) and job classes A, B, and C.
I have another Spring Boot application Y, which has M nodes and job classes D, E, and F.
How should I define application.properties for each application to make sure Quartz schedules the jobs on the right cluster (jobs A, B, C go to the N nodes and jobs D, E, F go to the M nodes)?
I guess this should be required for both applications:
spring.quartz.properties.org.quartz.jobStore.isClustered=true
But I'm not sure which of the properties below makes the call, or whether I need to define both for each application:
spring.quartz.properties.org.quartz.scheduler.instanceName=???
spring.quartz.properties.org.quartz.scheduler.instanceId=???
Thanks in advance!
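I can't verify this against your exact setup, but as I understand Quartz clustering, schedulers sharing the same tables are separated by the scheduler name (the SCHED_NAME column), while instanceId only has to be unique per node within a cluster (AUTO generates one). So it is instanceName that makes the call. Under that assumption (scheduler names here are illustrative), application X's nodes would use:

spring.quartz.properties.org.quartz.jobStore.isClustered=true
spring.quartz.properties.org.quartz.scheduler.instanceName=schedulerX
spring.quartz.properties.org.quartz.scheduler.instanceId=AUTO

and application Y's nodes:

spring.quartz.properties.org.quartz.jobStore.isClustered=true
spring.quartz.properties.org.quartz.scheduler.instanceName=schedulerY
spring.quartz.properties.org.quartz.scheduler.instanceId=AUTO

Jobs A, B, and C scheduled under schedulerX should then only ever be picked up by X's nodes, and likewise for Y.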
I'm working with very large files and using Spring Integration to process them. I want to know the best and most efficient way to handle them using Spring Integration and the provided DSL. I have a test CSV file with around 30K records, and I am using the FileSplitter component to read each line into memory and then split again on the delimiter to get the columns I need.
Code snippet below:
IntegrationFlows
        // Poll the input directory once a second for files that pass the filters
        .from(Files.inboundAdapter(new File(inputFilePath))
                        .filter(getFileFilters())
                        .autoCreateDirectory(true),
                c -> c.poller(Pollers.fixedRate(1000)))
        // Emit the file line by line rather than loading it whole
        .split(Files.splitter())
        // Fan the lines out across worker threads
        .channel(c -> c.executor(Executors.newWorkStealingPool()))
        // MyColumnSelector is my own helper that picks the column I need out of the line
        .handle((p, h) -> new MyColumnSelector().getCol((String) p, 1))
        // Split the selected value again on the comma delimiter
        .split(s -> s.applySequence(true).delimiters(","))
        .channel(c -> c.executor(Executors.newWorkStealingPool()))
        .get()
The issue was IDE and console logging overhead slowing things down. I tested with the same file outside the IDE and without the extra logging, and it processed significantly faster.
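For anyone hitting the same thing: if you are on Spring Boot's default Logback setup, a quick way to confirm the console-logging overhead is to quiet the noisy categories before re-running (the package name below is just my guess at the usual suspect; adjust to whatever dominates your output):

logging.level.root=INFO
logging.level.org.springframework.integration=WARN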
Not able to use TTL with a Spring Data CassandraRepository-based implementation.
Spring Data Cassandra version: latest
I am trying to use Cassandra's TTL feature for save operations with a Spring Data repository-based implementation. However, looking at the reference documentation (https://docs.spring.io/spring-data/cassandra/docs/current/reference/html/), I don't see any straightforward way of using it.
Even though the docs mention that it can be used, no example is provided for a repository-based implementation. Note that I do see some examples using CqlTemplate and CassandraOperations, but none for a repository.
No code written yet, as I am still trying to figure out how to use it.
My expectation would be some kind of @TTL(value in seconds) annotation on the repository save/update method for easier implementation.
Refer to A Sarkar's answer to this post: TTL support in spring boot application using spring-data-cassandra
Please see my sample code here: https://github.com/nontster/spring-data-cassandra-demo
I borrowed the sample code from this tutorial: https://www.baeldung.com/spring-data-cassandra-tutorial
You need to create the demo keyspace before you can run this code:
CREATE KEYSPACE demo WITH replication = {'class':'SimpleStrategy', 'replication_factor' : 1};
Run the saveBookTest() method in BookRepositoryIntegrationTest.java and you can see the TTL counting down on the column (I set the TTL to 600 seconds):
cqlsh:demo> SELECT title,TTL(year) FROM Book WHERE title='Head First Java' AND publisher='O''Reilly Media';
title | ttl(year)
-----------------+-----------
Head First Java | 597
(1 rows)
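For completeness, since the repository API itself has no TTL annotation: the route the reference docs do support is dropping down to CassandraTemplate (which backs the repositories) and passing the TTL via InsertOptions. A minimal sketch, assuming a mapped Book entity like the tutorial's and the auto-configured cassandraTemplate bean:

import java.time.Duration;
import org.springframework.data.cassandra.core.CassandraTemplate;
import org.springframework.data.cassandra.core.InsertOptions;

// Insert the entity with a 600-second TTL, mirroring what the test demonstrates.
InsertOptions ttlOptions = InsertOptions.builder()
        .ttl(Duration.ofSeconds(600))
        .build();
cassandraTemplate.insert(book, ttlOptions);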
I'm new to the Jaeger tracing system and have been trying to implement it for a Flask-based microservices architecture. Below is my Jaeger client config implemented in Python:
config = Config(
    config={
        'sampler': {
            'type': 'const',
            'param': 1,
        },
        'logging': True,
        'reporter_batch_size': 1,
    },
    service_name=service,
)
I read somewhere that the sampling strategy is used to decide how many traces to sample, especially for traces that don't carry any metadata. So as per this config, does it mean that I'm sampling each and every trace, or just a few traces randomly? Mysteriously, when I pass random inputs to create spans for my microservices, the spans show up only after 4 to 5 minutes. I would like to understand this configuration spec better but am not able to.
So as per this config, does it mean that I'm sampling each and every trace, or just a few traces randomly?
Using the sampler type as const with 1 as the value means that you are sampling everything.
Mysteriously, when I pass random inputs to create spans for my microservices, the spans show up only after 4 to 5 minutes.
There are several things that might be going on; you might not be closing spans, for instance. I recommend reading the following two blog posts to help diagnose it:
Help! Something is wrong with my Jaeger installation!
The life of a span
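On the span-closing point: the Python client only reports a span once it is finished, and the reporter flushes asynchronously, so unfinished spans, or a process that exits before the flush, can look like long delays. A minimal sketch of the shape I'd check for (the service and operation names are illustrative):

from jaeger_client import Config

config = Config(
    config={
        'sampler': {'type': 'const', 'param': 1},  # sample every trace
        'logging': True,
        'reporter_batch_size': 1,
    },
    service_name='my-service',
)
tracer = config.initialize_tracer()

# The context manager finishes the span on exit, which is what makes it
# eligible to be reported.
with tracer.start_span('handle-request') as span:
    span.set_tag('input', 'example')

tracer.close()  # flush any buffered spans before the process exits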
I've been searching the internet and the ESQL/WebSphere Message Broker documentation for a way to print a variable's value so that I can trace it in the broker logs, like System.out.println() in Java.
I can't debug the message flow because of some technical issues, so could you please suggest how to do this, or any workarounds?
UserTrace is supposed to fulfil this role for ESQL, but when UserTrace isn't helping, I see a lot of people use static calls out from ESQL to Java, which are then logged.
The Java code could be as simple as writing to stdout (which will end up in /var/mqsi/components//stdout), but more commonly I see this pattern used with an existing Java logging framework like Log4j.
The advantage of this approach is that it unifies logging between your JCNs (Java Compute nodes) and your ESQL Compute nodes, as sketched below.
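A minimal sketch of that pattern; the class name com.example.EsqlLogger is illustrative, not something IBM provides. The ESQL side declares the binding:

CREATE PROCEDURE logToStdout(IN msg CHARACTER)
LANGUAGE JAVA
EXTERNAL NAME "com.example.EsqlLogger.log";

and the Java side is a plain static method (ESQL CHARACTER maps to java.lang.String):

package com.example;

public class EsqlLogger {
    // Must be public static for ESQL to call it.
    public static void log(String msg) {
        System.out.println(msg); // or hand off to Log4j etc.
    }
}

You can then call it from any Compute node, e.g. CALL logToStdout('reached node X');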
UserTrace should suit your need. Put a Trace node at the point where you want to log, select file trace, and give a file path.
Give the pattern as:
${CURRENT_TIMESTAMP}
${Root}
${Environment}
${LocalEnvironment}
${ExceptionList}
so that it logs everything.
In higher environments, you have to use the mqsichangetrace command to enable the trace on the flow.
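From memory, and worth checking against your version's documentation, enabling and then reading user trace for a single flow looks roughly like this (broker, integration server, and flow names are placeholders):

mqsichangetrace MYBROKER -u -e MYEXECGROUP -f MyMessageFlow -l debug
mqsireadlog MYBROKER -u -e MYEXECGROUP -f -o trace.xml
mqsiformatlog -i trace.xml -o trace.txt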
Probably the easiest way is:
1) Set a temporary location in the Environment to the value of the variable: SET Environment.temp = yourVar;
2) Subsequently, in a Trace node, set the Pattern on the Basic tab of the Trace node to that temporary location: ${Environment.temp}
3) Configure the Trace node to print to a File, User Trace, or the Local Error Log.
4) Deploy and run your flow. Then look in the output of the Trace node.
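Putting steps 1 and 2 together, a small sketch (the XMLNSC path is just an example field standing in for yourVar):

-- Compute node wired just before the Trace node
CREATE FUNCTION Main() RETURNS BOOLEAN
BEGIN
    SET OutputRoot = InputRoot;
    SET Environment.temp = InputRoot.XMLNSC.Order.CustomerId;
    RETURN TRUE;
END;

with the Trace node's Pattern (Basic tab) set to: yourVar = ${Environment.temp}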