jsonPayload logging.googleapis.com/trace_sampled not mapping to traceSampled - google-cloud-stackdriver

I have a custom logging setup that uses structured logging with JSON. I was able to map the trace and spanId fields, but doing the same for traceSampled didn't work: the field stays as "logging.googleapis.com/trace_sampled" in the jsonPayload object instead of being promoted to traceSampled. I did this as per
https://cloud.google.com/logging/docs/agent/configuration#special-fields
How do I get traceSampled working with structured logging?
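For illustration, here is a minimal sketch of the kind of entry being written (all values are made up), using the special-field keys from that page; trace and spanId are promoted as expected, while logging.googleapis.com/trace_sampled is the key that stays behind in jsonPayload:

public class StructuredLogExample {
    public static void main(String[] args) {
        // one JSON object per line, as the agent's structured-logging docs describe
        String entry = "{"
            + "\"message\":\"request handled\","
            + "\"severity\":\"INFO\","
            + "\"logging.googleapis.com/trace\":\"projects/my-project/traces/abc123\","
            + "\"logging.googleapis.com/spanId\":\"000000000000004a\","
            + "\"logging.googleapis.com/trace_sampled\":true"
            + "}";
        System.out.println(entry);
    }
}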

Have you considered using Stackdriver Trace? It seems like it would get the info you're looking for. https://cloud.google.com/trace/

Related

Overriding Spring cloud sleuth Trace Id format

We are looking at leveraging Spring Cloud Sleuth for distributed tracing and have worked on a POC. It seems like a great solution and works out of the box.
A follow-up question, though:
We use random UUIDs rather than 64-bit IDs as trace IDs. We understand that custom headers (a new trace ID, for example) can be added alongside the Sleuth headers, but would it be possible to override the default trace ID format for Sleuth? We have looked through the documentation, and perhaps Propagation is
the way to go. Can someone who has done this point us in the right direction, and to some examples if possible? The help would be much appreciated.
We are using the latest release, 2.0.1, which uses the Brave library.
Any help/pointers would be greatly appreciated.
Thanks,
GK
Spring Sleuth doesn't provide a way to override the default IDs. According to OpenZipkin, 'Trace identifiers are 64 or 128-bit, but all span identifiers within a trace are 64-bit. All identifiers are opaque.'
Refer to this:
https://github.com/openzipkin/b3-propagation#identifiers
So you can either put the generated request ID on the span as a tag ('tag': 'requestID'), or place the generated UID in a different field and use the propagation technique, as sketched below. See ExtraFieldPropagationTest for reference:
https://github.com/openzipkin/brave/blob/master/brave/src/test/java/brave/propagation/ExtraFieldPropagationTest.java
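For example, a rough sketch of the tagging option (assuming a brave.Tracer bean is available and that a hypothetical extra field named x-request-id has been registered for propagation; the names are illustrative, not part of Sleuth):

import brave.Span;
import brave.Tracer;
import brave.propagation.ExtraFieldPropagation;

public class RequestIdTagger {

    private final Tracer tracer;

    public RequestIdTagger(Tracer tracer) {
        this.tracer = tracer;
    }

    public void tagCurrentSpan() {
        // read the propagated extra field from the current trace context
        String requestId = ExtraFieldPropagation.get("x-request-id");
        Span span = tracer.currentSpan();
        if (span != null && requestId != null) {
            span.tag("requestID", requestId); // record the UUID as a tag on the span
        }
    }
}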
Even though this is not possible (AFAIK), if your use case is to use custom headers for log correlation, all that's needed is to set these properties (see the related SO answer):
# To add request-id (to MDC?) via sleuth
spring.sleuth.baggage.correlation-fields=x-request-id
spring.sleuth.baggage.remote-fields=x-request-id
And then this can be used in your logging pattern:
%d{yyyy-MM-dd HH:mm:ss.SSS} %-5level [%X{traceId:-},%X{spanId:-},%X{x-request-id:-}] [%thread] %logger{40} : %msg%n
Now along with the built-in traceId & spanId, the value of the header x-request-id will also be logged:
2022-06-28 19:55:40.071 WARN [8add19deba73c0f3,cda65c8122e5e025,some-request-id] [reactor-http-epoll-8] c.i.p.g.s.MyService : My warn log
To make this more concise, you can skip traceId and spanId if they're not required. A better approach would have been to use them as a fallback when your own custom correlation header is not available, but Logback currently does not (and probably will not) support nested default values for MDC.
What you can do is generate the ID in a different field and propagate it further on. Check this part of the documentation: https://cloud.spring.io/spring-cloud-static/Finchley.SR1/single/spring-cloud.html#_propagating_extra_fields
52.1 Propagating extra fields
Sometimes you need to propagate extra fields, such as a request ID or an alternate trace context. For example, if you are in a Cloud Foundry environment, you might want to pass the request ID, as shown in the following example:

// when you initialize the builder, define the extra field you want to propagate
Tracing.newBuilder().propagationFactory(
    ExtraFieldPropagation.newFactory(B3Propagation.FACTORY, "x-vcap-request-id")
);

// later, you can tag that request ID or use it in log correlation
requestId = ExtraFieldPropagation.get("x-vcap-request-id");

You may also need to propagate a trace context that you are not using. For example, you may be in an Amazon Web Services environment but not be reporting data to X-Ray. To ensure X-Ray can co-exist correctly, pass-through its tracing header, as shown in the following example:

tracingBuilder.propagationFactory(
    ExtraFieldPropagation.newFactory(B3Propagation.FACTORY, "x-amzn-trace-id")
);

[Tip] In Spring Cloud Sleuth all elements of the tracing builder Tracing.newBuilder() are defined as beans. So if you want to pass a custom PropagationFactory, it's enough for you to create a bean of that type and we will set it in the Tracing bean.
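Following that tip, a minimal sketch of what such a bean could look like (the field name x-request-id is just an illustration, not something Sleuth defines):

import brave.propagation.B3Propagation;
import brave.propagation.ExtraFieldPropagation;
import brave.propagation.Propagation;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class PropagationConfig {

    // Sleuth picks this bean up and wires it into the Tracing bean,
    // so the extra field is propagated on every traced request.
    @Bean
    public Propagation.Factory propagationFactory() {
        return ExtraFieldPropagation.newFactory(B3Propagation.FACTORY, "x-request-id");
    }
}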

access fields from log using ruby filter

What I am trying to do is pass my grok fields, in one way or another, to an external Ruby filter script and set specific tags based on those fields. The problem is that I can only get the whole log message with the Event API.
My question is: is it possible to access fields of the already processed log message in the Ruby filter, or do I have to parse the whole message myself? That would not be optimal, because every log message would be processed twice. Alternatively, I could drop the grok filter completely and do everything myself in the script.
Yes, it is possible.
You can get read-only access to any field using the Event API:
filter {
  ruby {
    # "foo" is a field extracted earlier in the pipeline, e.g. by grok
    code => 'event.get("foo")'
  }
}
The field can also be a nested field reference such as [foo][bar]:
event.get("[foo][bar]")

How to log an error to Stackdriver Error Reporting via Stackdriver Logging

I am trying to log an error to Stackdriver Error Reporting in Go. The first page of the Error Reporting documentation states: "Reporting errors from your application can be achieved by logging application errors to Google Stackdriver Logging or..." (https://cloud.google.com/error-reporting/docs/). How do I do that with the Go client libraries?
The Entry provided by the logging library is constructed like this:
github.com/GoogleCloudPlatform/.../logging.go#L412
type Entry struct {
    Timestamp   time.Time
    Severity    Severity
    Payload     interface{}
    Labels      map[string]string
    InsertID    string
    HTTPRequest *HTTPRequest
    Operation   *logpb.LogEntryOperation
    LogName     string
    Resource    *mrpb.MonitoredResource
}
Do I need to marshal this JSON structure into the Payload? Or can I insert the stacktrace as string?
There is a dedicated Go package that should help you achieve this: import "cloud.google.com/go/errorreporting"
You can configure it to report errors via Stackdriver Logging, and it will take care of sending the correct log structure.
From the docs:
// Payload must be either a string or something that
// marshals via the encoding/json package to a JSON object
// (and not any other type of JSON value).
It looks like inserting the stack trace as a string is the way to go.

How to add metadata to the document using marklogic mapreduce connector api

I want to write a document to the MarkLogic database using the MarkLogic mapreduce API (let's say as in the example). I want to add metadata to the document that I am writing back to the MarkLogic database in the reducer:
context.write(outputURI, result);
If adding metadata to the document with the mapreduce API of MarkLogic is possible, please let me know.
For Metadata, I am assuming you are talking about the document properties fragment. For background on document properties, please see here: https://docs.marklogic.com/guide/app-dev/properties#id_19516
For use in MarkLogic mapreduce, please see here (the output classes):
https://docs.marklogic.com/guide/mapreduce/output#id_76625
I believe you need to extend/modify your example to also write content to the properties fragment using the PropertyOutputFormat class.
One of the sample applications in the same documentation is an example of saving content in the properties fragment. If, however, you would like to fast-track yourself by looking at some source code, see the examples here (including one that writes to a document properties fragment): https://gist.github.com/evanlenz/2484318 - specifically LinkCountInProperty.java.
I used the property mapreduce.marklogic.output.content.collection in the configuration XML. Adding this property caused the inserted data to be added to that collection.
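For illustration, a minimal sketch of setting that property on the Hadoop job configuration (the job and collection names are made up, and the rest of the MarkLogic connector setup is omitted):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class WriteToMarkLogicJob {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // documents written by the reducer are added to this collection
        conf.set("mapreduce.marklogic.output.content.collection", "my-collection");

        Job job = Job.getInstance(conf, "write-with-collection");
        // ... set the mapper, reducer, output format (e.g. ContentOutputFormat)
        // and MarkLogic connection properties here, as in the connector samples
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}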

How to use SLF4J to log to two different files based on type of msg..?

I am running a client-server program in Spring.
I am trying to implement SLF4J + Logback for logging.
The thing is that my client (which in real life would be a device/sensor) will send me data as a string containing various fields separated by commas. The exact pattern is: deviceID,DeviceName,DeviceLocation,TimeStamp,someValue
What I want is to filter the message in Logback using deviceID and then write the whole string to a file named after that ID. For example, 1,indyaah,Scranton,2011-8-10 12:00:00,34 should be logged to the file device1.log dynamically.
So how can I use EvaluatorFilter in logback/janino?
Thanks in advance.
Logback provides all the features you need out of the box. You need to learn about SiftingAppender and probably MDC.
SiftingAppender wraps several homogeneous appenders and picks a single one for each logging message based on user-defined criteria (called a discriminator). The documentation is pretty good, and it has some nice examples.
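For instance, a rough sketch of the application side (the MDC key deviceId and the class name are made up; the matching SiftingAppender in logback.xml would use that MDC key as its discriminator and in the log file name):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class DeviceMessageHandler {

    private static final Logger log = LoggerFactory.getLogger(DeviceMessageHandler.class);

    public void handle(String rawMessage) {
        // rawMessage: deviceID,DeviceName,DeviceLocation,TimeStamp,someValue
        String deviceId = rawMessage.split(",")[0];
        MDC.put("deviceId", deviceId);   // the sifting discriminator keys on this value
        try {
            log.info(rawMessage);        // routed by the SiftingAppender to device<id>.log
        } finally {
            MDC.remove("deviceId");      // avoid leaking the key to unrelated log lines
        }
    }
}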
