We are looking at leveraging Spring Cloud Sleuth for distributed tracing, and we've worked on a POC. It seems like a great solution and works out of the box.
We had a follow-up question, though:
We use random UUIDs rather than 64-bit IDs as trace IDs. We understand that custom headers (a new trace ID, for example) can be added alongside the Sleuth headers, but would it be possible to override the default trace ID format in Sleuth? We have looked through the documentation, and perhaps Propagation is the way to go. Can someone who has done this point us in the right direction, and to some examples if possible?
We are using the latest release, 2.0.1, which uses the Brave library.
Any help/pointers would be greatly appreciated.
Thanks,
GK
Spring Sleuth doesn't provide a way to override the default IDs. According to OpenZipkin, 'Trace identifiers are 64 or 128-bit, but all span identifiers within a trace are 64-bit. All identifiers are opaque.'
Refer to this:
https://github.com/openzipkin/b3-propagation#identifiers
So you can either put the generated request ID as a tag ('tag':'requestId'), or place the generated UUID in a separate field and use the propagation technique. See ExtraFieldPropagationTest for reference:
https://github.com/openzipkin/brave/blob/master/brave/src/test/java/brave/propagation/ExtraFieldPropagationTest.java
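For illustration, here is a minimal sketch of both options with Brave 5.x as used by Sleuth 2.0.x. The tag key and the x-request-id field name are arbitrary examples, and the extra field must be registered with the propagation factory at startup (see the documentation excerpt quoted further below):

import brave.Span;
import brave.Tracer;
import brave.propagation.ExtraFieldPropagation;

public class RequestIdRecorder {

    private final Tracer tracer; // Sleuth exposes the Brave Tracer as a bean

    public RequestIdRecorder(Tracer tracer) {
        this.tracer = tracer;
    }

    public void recordRequestId(String requestId) {
        // Option 1: attach the UUID as a tag on the current span
        Span span = tracer.currentSpan();
        if (span != null) {
            span.tag("requestId", requestId);
        }

        // Option 2: carry the UUID as an extra propagated field
        ExtraFieldPropagation.set("x-request-id", requestId);
    }
}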
Even though overriding the trace ID format is not possible (AFAIK), if your use case is to use custom headers for log correlation, all that's needed is setting these properties (related SO answer):
# To add x-request-id to the MDC via Sleuth
spring.sleuth.baggage.correlation-fields=x-request-id
spring.sleuth.baggage.remote-fields=x-request-id
And then this can be used in your logging pattern:
%d{yyyy-MM-dd HH:mm:ss.SSS} %-5level [%X{traceId:-},%X{spanId:-},%X{x-request-id:-}] [%thread] %logger{40} : %msg%n
Now along with the built-in traceId & spanId, the value of the header x-request-id will also be logged:
2022-06-28 19:55:40.071 WARN [8add19deba73c0f3,cda65c8122e5e025,some-request-id] [reactor-http-epoll-8] c.i.p.g.s.MyService : My warn log
To make this more concise, you can skip traceId & spanId if they are not required. A better option would have been to use them as a fallback when your own custom correlation header is not available, but Logback currently does not (and probably will not) support nested default values for MDC.
What you can do is generate the ID in a separate field and propagate it onward. Check this part of the documentation: https://cloud.spring.io/spring-cloud-static/Finchley.SR1/single/spring-cloud.html#_propagating_extra_fields
52.1 Propagating extra fields

Sometimes you need to propagate extra fields, such as a request ID or an alternate trace context. For example, if you are in a Cloud Foundry environment, you might want to pass the request ID, as shown in the following example:

// when you initialize the builder, define the extra field you want to propagate
Tracing.newBuilder().propagationFactory(
    ExtraFieldPropagation.newFactory(B3Propagation.FACTORY, "x-vcap-request-id")
);

// later, you can tag that request ID or use it in log correlation
requestId = ExtraFieldPropagation.get("x-vcap-request-id");

You may also need to propagate a trace context that you are not using. For example, you may be in an Amazon Web Services environment but not be reporting data to X-Ray. To ensure X-Ray can co-exist correctly, pass-through its tracing header, as shown in the following example:

tracingBuilder.propagationFactory(
    ExtraFieldPropagation.newFactory(B3Propagation.FACTORY, "x-amzn-trace-id")
);

[Tip] In Spring Cloud Sleuth all elements of the tracing builder Tracing.newBuilder() are defined as beans. So if you want to pass a custom PropagationFactory, it's enough for you to create a bean of that type and we will set it in the Tracing bean.
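Following that tip, here is a minimal sketch of such a bean (the x-request-id field name is an example, not something Sleuth defines):

import brave.propagation.B3Propagation;
import brave.propagation.ExtraFieldPropagation;
import brave.propagation.Propagation;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class PropagationConfig {

    // Sleuth detects this bean and sets it on the Tracing builder
    @Bean
    public Propagation.Factory propagationFactory() {
        return ExtraFieldPropagation.newFactory(B3Propagation.FACTORY, "x-request-id");
    }
}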
So the question: I have the following JSON:
{
  "type": "ipv4",
  "value": "1.2.3.4",
  "firstSeen": "2020-07-10 15:00:00.000",
  "totalCount": 8
}
I need to create a Spring Boot microservice from it, with the following restrictions:
TotalCount cannot be less than 0 and cannot be more than 100.
firstSeen date should ALWAYS be converted to ISO 8601 format. The user can enter the date in any string format. Return an error if it is not well formed.
Expose the following RESTful APIs:
Create a new record (as shown above, id auto-generated)
Get record by value
As this is my first time working with microservices, I cannot quite understand this problem. Is there anyone who can help me with this, please?
You will need to create a basic Spring Boot project using Spring Initializr. If you are using IntelliJ, you can use this link as a reference: https://www.jetbrains.com/help/idea/your-first-spring-application.html#create-new-spring-boot-project.
Then add a new controller method that accepts a JSON request. Since you are trying to create a new record, I suggest you use the POST method. The JSON request will carry the four input parameters you mentioned. This is very basic and you should be able to find it in pretty much any Spring Boot tutorial online; see for example https://dzone.com/articles/simple-spring-boot-post
This JSON request can have validator annotations that check for the criteria you gave. For a numeric field like totalCount, use @Min(0) and @Max(100) with a message such as "TotalCount cannot be less than 0 and cannot be more than 100" (note that @Size applies to strings and collections, not numbers; see the sketch below). https://www.baeldung.com/jpa-size-length-column-differences
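As an illustration, here is a minimal sketch of the request DTO and controller combining the two points above. Class, field, and path names are assumptions, the two classes are shown together only for brevity, and spring-boot-starter-web plus spring-boot-starter-validation are assumed to be on the classpath:

import javax.validation.Valid;
import javax.validation.constraints.Max;
import javax.validation.constraints.Min;
import javax.validation.constraints.NotBlank;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

class RecordRequest {

    @NotBlank
    private String type;

    @NotBlank
    private String value;

    // accepted as a raw string and normalized to ISO 8601 afterwards (see the next sketch)
    private String firstSeen;

    @Min(value = 0, message = "TotalCount cannot be less than 0")
    @Max(value = 100, message = "TotalCount cannot be more than 100")
    private Integer totalCount;

    // getters and setters omitted for brevity
}

@RestController
class RecordController {

    // @Valid makes Spring return 400 with the messages above when a constraint fails
    @PostMapping("/records")
    public ResponseEntity<String> create(@Valid @RequestBody RecordRequest request) {
        // persist the record here; the id would be auto-generated by the persistence layer
        return ResponseEntity.ok("created");
    }
}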
For the date you might need to write a custom validator or converter that checks and normalizes the input (a sketch follows below). For creating a record, I guess you mean adding it to a database; you can configure your database using the YAML file, and again there are lots of online resources on how to configure a database in your Spring Boot project. https://blog.tericcabrel.com/write-custom-validator-for-body-request-in-spring-boot/
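Purely as a sketch, one way to normalize the date is to try a small list of accepted patterns and fail if none match. The pattern list here is an assumption; "any string format" in practice means "any format you decide to support":

import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;
import java.util.Arrays;
import java.util.List;

public class DateNormalizer {

    // candidate input patterns to accept; extend this list as needed
    private static final List<DateTimeFormatter> INPUT_FORMATS = Arrays.asList(
            DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSS"),
            DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss"),
            DateTimeFormatter.ofPattern("dd/MM/yyyy HH:mm"));

    // Returns the date formatted as ISO 8601, or throws if no known pattern matches.
    public static String toIso8601(String raw) {
        for (DateTimeFormatter format : INPUT_FORMATS) {
            try {
                return LocalDateTime.parse(raw, format)
                        .format(DateTimeFormatter.ISO_LOCAL_DATE_TIME);
            } catch (DateTimeParseException ignored) {
                // try the next pattern
            }
        }
        throw new IllegalArgumentException("firstSeen is not a well-formed date: " + raw);
    }
}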
Since it's your first time, it might take a while to figure out the various details, but I assure you that once you get the hang of it, it's going to be easy.
My task is to implement a webservice that:
consumes an XML file on a POST endpoint
in happy flow, it returns a DTO as JSON + HTTP 2xx
the incoming XML file is validated against an XSD; if the validation fails, a JSON with a list of all validation errors is returned (including the line, column, and error) with HTTP 400 Bad Request
the application exposes two endpoints, only one of them should be validated
I have started the implementation with Spring Boot + Web, using a regular @PostMapping with "consumes" and "produces" set to application/xml and application/json, respectively. The usual flow works perfectly fine. Now, I stumbled upon the issue of validating the incoming payload. What I figured out:
1) I have to validate the payload before it is converted (unmarshalled) to an object.
2) Once validated, I have to either:
allow further processing
stop any further processing, write the error object to the response and set the status code to 400 Bad request
My approaches were:
1) using a RequestBodyAdvice, more specifically the beforeBodyRead method implementation. I had the following issue here: I don't know how to write anything to the response in case the validation fails.
2) using a Filter (I've extended OncePerRequestFilter) - fortunately, I can read the request (request.getInputStream()) and write to the response (response.getOutputStream()).
However, how can I do the selective filtering (as mentioned, I only want to validate one single endpoint)?
Are there any other alternatives for placing the incoming request XSD validation? Is spring-web the appropriate choice here? Would you recommend some other library / framework?
To validate XML against an XSD schema, my preference is XMLBeans. It is very easy to use. Other options are JAXB and Castor. Take a look at Java to XML conversions?.
You will need to generate a JAR from the XSD schema and put it on the classpath of your application so that its classes are available to you for validation. Please take a look at this blog.
You can use the validation API as mentioned here.
I would prefer to write the validation code in an aspect so that it can be reused by other APIs.
If validation fails, throw a suitable exception from the aspect itself.
If validation passes, process the input string that you receive.
Please let us know if you need any more information.
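Independently of the library choice, here is a minimal sketch of XSD validation using the standard javax.xml.validation API, collecting the line/column details your error response needs (class and method names are assumptions):

import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import org.xml.sax.ErrorHandler;
import org.xml.sax.SAXException;
import org.xml.sax.SAXParseException;

public class XsdValidator {

    private final Schema schema;

    public XsdValidator(StreamSource xsdSource) throws SAXException {
        SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        this.schema = factory.newSchema(xsdSource);
    }

    // Returns all validation errors (line, column, message); an empty list means the XML is valid.
    public List<String> validate(String xml) throws Exception {
        List<String> errors = new ArrayList<>();
        Validator validator = schema.newValidator();
        validator.setErrorHandler(new ErrorHandler() {
            public void warning(SAXParseException e) { /* ignore warnings */ }
            public void error(SAXParseException e) { errors.add(describe(e)); }
            public void fatalError(SAXParseException e) { errors.add(describe(e)); }
        });
        try {
            validator.validate(new StreamSource(new StringReader(xml)));
        } catch (SAXParseException e) {
            // fatal well-formedness errors still abort parsing; already recorded via fatalError
        }
        return errors;
    }

    private static String describe(SAXParseException e) {
        return "line " + e.getLineNumber() + ", column " + e.getColumnNumber() + ": " + e.getMessage();
    }
}

Your filter, advice, or aspect can then turn a non-empty list into a 400 response with the errors serialized as JSON.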
I need to know how to update the values in NiFi processors using the REST API.
https://nifi.apache.org/docs/nifi-docs/rest-api/index.html
For example, I have used the processor structure below:
GetFile > SplitText > ExtractText > ReplaceText > ConvertJSONToSQL > PutSQL
I have passed the following inputs for the above processors:
FileLocation (GetFile)
validation (ExtractText)
ReplacementValue (ReplaceText)
DBCP Connection Pool, username, and password for SQL
I just need to use a NiFi REST API client to write the above inputs into the processors.
For example, if I give the processor name and the input file in the REST API client, then it will write them into the processor.
Please correct me if I'm doing anything wrong.
Any help is appreciated, and please tell me if any other way is possible.
Mahen,
You can issue a PUT request to /processors/{id} and provide the new value of the "Replacement Value" property. You'll need to provide a JSON body in the request to do this; you can see its structure by expanding the endpoint noted above on the documentation link you provided, then clicking ProcessorEntity > ProcessorDTO > ProcessorConfigDTO to see the pop-up dialogs with the element listing and examples. You can also quickly get the current values of the processor by issuing a GET request to /processors/{id}.
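For illustration, a rough sketch of such a request against an unsecured NiFi on localhost. The processor id, client id, revision version, and property value are placeholders; do a GET on the same URL first to obtain the current revision, since NiFi rejects updates with a stale version:

curl -X PUT http://localhost:8080/nifi-api/processors/016d-example-id \
  -H 'Content-Type: application/json' \
  -d '{
    "revision": { "clientId": "my-client", "version": 3 },
    "component": {
      "id": "016d-example-id",
      "config": {
        "properties": { "Replacement Value": "my-new-value" }
      }
    }
  }'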
I am running a client-server program in Spring.
I am trying to implement SLF4J + Logback for logging.
Now the thing is, my client (which in real life would be a device/sensor) will send me data in string format containing various fields separated by commas. The exact pattern is like this: deviceID,DeviceName,DeviceLocation,TimeStamp,someValue
Now what I want is to filter the message in Logback by deviceID and then write the whole string to a file named after the device, such as device1.log. For example, 1,indyaah,Scranton,2011-8-10 12:00:00,34 should be logged to the file device1.log dynamically.
So how can I use EvaluatorFilter in Logback/Janino?
Thanks in advance.
Logback provides all the features you need out of the box. You need to learn about SiftingAppender and probably MDC.
SiftingAppender wraps several homogeneous appenders and picks a single one for each logging message based on a user-defined criterion (called a discriminator). The documentation is pretty good, and it has some nice examples.
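A minimal sketch, assuming you parse the device ID out of the incoming message and put it into the MDC under the key deviceId before logging (the key name and file pattern are assumptions). In logback.xml:

<configuration>
  <appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
    <!-- the default discriminator reads the MDC key below -->
    <discriminator>
      <key>deviceId</key>
      <defaultValue>unknown</defaultValue>
    </discriminator>
    <sift>
      <appender name="FILE-${deviceId}" class="ch.qos.logback.core.FileAppender">
        <file>device${deviceId}.log</file>
        <encoder>
          <pattern>%d{yyyy-MM-dd HH:mm:ss} %-5level %msg%n</pattern>
        </encoder>
      </appender>
    </sift>
  </appender>
  <root level="INFO">
    <appender-ref ref="SIFT" />
  </root>
</configuration>

And on the Java side:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class DeviceMessageLogger {

    private static final Logger log = LoggerFactory.getLogger(DeviceMessageLogger.class);

    public void logMessage(String rawMessage) {
        // rawMessage: deviceID,DeviceName,DeviceLocation,TimeStamp,someValue
        String deviceId = rawMessage.split(",")[0];
        MDC.put("deviceId", deviceId);
        try {
            log.info(rawMessage); // routed to device<id>.log by the SiftingAppender
        } finally {
            MDC.remove("deviceId");
        }
    }
}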
If I add this to my context:
<integration:message-history/>
I get a message header populated with the names (ids) of all the named components through which the message has passed.
But if I have a chain:
<integration:chain id="inboundChain" input-channel="inboundChannel">
<integration:transformer ref="myTransformer"/>
<integration:filter ref="myFilter"/>
<integration:router ref="myRouter"/>
</integration:chain>
I only get "inboundChain" in the list of components, as I cannot add an id to the components nested in the chain.
Any way to get myTransformer etc into the message history?
The answer is no.
See the Spring forum post here.