How to add a trace id to each log in a Go microservice

I want to add a trace id to the logging done for each request to the microservice, similar to how a Spring Boot application can set a trace id in MDC and fetch it while logging.
I have done some research and found that the MDC equivalent in Go is context.Context. So I have set the trace id in my context. Now the problem is that wherever I have to log with the trace id, I need to pass the context to that function, which is a very ugly approach. I am looking for a better solution to this problem.
func HandlerFunction(f gin.HandlerFunc) gin.HandlerFunc {
    return func(c *gin.Context) {
        reqTraceId := c.Request.Header.Get("trace-id")
        if reqTraceId == "" {
            // No trace id from the caller, generate one for this request.
            reqUid, _ := uuid.NewRandom()
            c.Request.Header.Set("trace-id", reqUid.String())
        }
        f(c)
    }
}

It might be worth reading up on context.Context, particularly this article, which has a section that says:
At Google, we require that Go programmers pass a Context parameter as the first argument to every function on the call path between incoming and outgoing requests.
TL;DR - it's fine to pass the context, but what's the best way?
There are two main patterns:
Ask the context to give you a logger
Give the logger the context
Context can be used to store values:
context.WithValue(ctx, someKey, someValue)
This means we can either do:
somepackage.Log(ctx).Info("hello world")
// or
sompackage.Info(ctx, "hello world")
Either implementation can pull the required values out of the context, so none of the logging call sites need to worry about the extra information that would have lived in MDC.

From my side, I found that with the default log package you can set a prefix via log.SetPrefix(traceId); the log then prints the trace id as a prefix in the current function and any sub-functions/structs it calls. Note that the default logger is a single global, so the prefix applies process-wide rather than per request.
import (
    "log"

    "github.com/google/uuid"
)

func (hdl *HTTPHandler) example() {
    traceId := uuid.NewString()
    log.SetPrefix(traceId + " - ")
    log.SetFlags(log.LstdFlags)
    // ...
    log.Println("......")
}

This issue can also be solved with a dependency injection container.
We can implement "request-scoped" injections, so that for each request we recreate the whole dependency tree that uses request-scoped dependencies (logger, error reporter, clients that propagate the context when calling other services).
But as I understand it, dependency injection containers are not considered best practice or an "idiomatic" way in Go.
This approach can also have performance and memory costs, since objects are recreated for each request.

Related

GoLang web service - How to translate errors declared in package

I'm implementing a server using the Gin framework.
Gin has a handler for each route.
Each handler has its own controller that returns some result (basically an error):
package controller

var (
    ErrTooManyAttempts = errors.New("too many attempts")
    ErrNoPermission    = errors.New("no permission")
    ErrNotAvailable    = errors.New("not available")
)
These errors are created for developers and logging, so we need to beautify and translate them before sending them to the client.
To get a translation, we need to know the key of the message.
The problem is that I need to bind these errors in some map[error]string to look up the translation key by error.
This is quite tedious because I have to bind every error to its corresponding key.
To improve it, I'd like to use reflection:
Get all variables in the package
Walk through them and find those with the Err prefix
Generate a translation key from the package name and the error name, e.g. controller-ErrTooManyAttempts
Merge into the translation file, so it would look like this:
{
    "controller-ErrTooManyAttempts": "Too many attempts. Please try again later",
    "controller-ErrNoPermission": "Permission to perform this action is denied",
    "controller-ErrNotAvailable": "Service not available. Please try again later"
}
What is the correct way to translate errors from a package? Could you provide an example?

Can I store sensitive data in a Vert.x context in a Quarkus application?

I am looking for a place to store some request scoped attributes such as user id using a Quarkus request filter. I later want to retrieve these attributes in a Log handler and put them in the MDC logging context.
Is Vertx.currentContext() the right place to put such request attributes? Or can the properties I set on this context be read by other requests?
If this is not the right place to store such data, where would be the right place?
Yes ... and no :-D
Vertx.currentContext() can provide two types of contexts:
the root context, shared between all the concurrent processing executed on this event loop (so do NOT share data through it)
duplicated contexts, which are local to a processing and its continuation (you can share data in these)
In Quarkus 2.7.2, we have done a lot of work to improve our support of duplicated contexts. While before they were only used for HTTP, they are now also used for gRPC and @ConsumeEvent. Support for Kafka and AMQP is coming in Quarkus 2.8.
Also in Quarkus 2.7.2, we introduced two new features that could be useful:
you can no longer store data in a root context; we detect that for you and throw an UnsupportedOperationException. The reason is safety.
we introduced a new utility class (io.smallrye.common.vertx.ContextLocals) to access the context locals.
Here is a simple example:
AtomicInteger counter = new AtomicInteger();

public Uni<String> invoke() {
    Context context = Vertx.currentContext();
    ContextLocals.put("message", "hello");
    ContextLocals.put("id", counter.incrementAndGet());
    return invokeRemoteService()
            // Switch back to our duplicated context:
            .emitOn(runnable -> context.runOnContext(runnable))
            .map(res -> {
                // Can still access the context-local data
                String msg = ContextLocals.<String>get("message").orElseThrow();
                Integer id = ContextLocals.<Integer>get("id").orElseThrow();
                return "%s - %s - %d".formatted(res, msg, id);
            });
}

How to handle weird API flow with implicit create step in custom terraform provider

Most Terraform providers demand a predefined flow: Create/Read/Update/Delete/Exists.
I am in a weird situation developing a provider against an API where this behavior diverges a bit.
There are two kinds of resources, Host and Scope. A host can have many scopes. Scopes are updated with configurations.
This generally fits well into the Terraform flow; a full CRUDE lifecycle is possible - except in one instance.
When a new Host is made, it automatically has a default scope attached to it. It is always there, cannot be deleted, etc.
I can't figure out how to have my provider handle this gracefully. I would want Terraform to treat the default scope like any other resource, but it has no explicit CREATE/DELETE, only READ/UPDATE/EXISTS - while every other scope attached to the host has the full CREATE/DELETE.
Importing is not an option due to density, requiring an import for every host would render the entire thing pointless.
I originally was going to attempt to split Scopes and Configurations into separate resources, so that one could be fulfilled by the Host (the host providing the Scope ID for one configuration, while other configurations get their scope IDs from a scope resource).
However, this approach falls apart because the API for both is the same, unless I add the abstraction of creating an empty scope and then applying a configuration against it, which may not be fully supported. It would essentially be two resources controlling one underlying object, which could lead to dramatic conflicts.
A paraphrased example of an execution I thought about implementing
resource "host" "test_integrations" {
  name         = "test.integrations.domain.com"
  account_hash = "${local.integrationAccountHash}"
  services     = [40]
}

resource "configuration" "test_integrations_root_configuration" {
  name         = "root"
  parent_host  = "${host.test_integrations.id}"
  account_hash = "${local.integrationAccountHash}"
  scope_id     = "${host.test_integrations.root_scope_id}"
  hostnames    = ["test.integrations.domain.com"]
}

resource "scope" "test_integrations_other" {
  account_hash = "${local.integrationAccountHash}"
  host_hash    = "${host.test_integrations.id}"
  path         = "/non/root/path"
  name         = "Some Other URI Path"
}

resource "configuration" "test_integrations_other_configuration" {
  name         = "other"
  parent_host  = "${host.test_integrations.id}"
  account_hash = "${local.integrationAccountHash}"
  scope_id     = "${scope.test_integrations_other.id}"
}
In this example flow, a configuration and a scope resource unfortunately point at the same underlying object, which I worry would cause conflicts or confusion over which one is responsible for what, and it dramatically confuses the create/delete lifecycle.
But I can't figure out how the Terraform lifecycle would allow a resource that only does UPDATE/READ/EXISTS if, say, a flag was given (and how state would handle that).
An alternative would be to have just a Configuration resource, but then the root configuration would need to skip create/delete, as it is inherently tied to the host.
Ideally I'd be able to handle this situation gracefully. I am trying to avoid including the root scope/configuration in the host definition, as it would create a split in how they are written and handled.
The documentation for providers implies you can use a resource AS a schema object in a resource, but does not explain how or why. If it works the way I imagine it, it may work to create a resource that is only used to inject into the host perhaps - but I don't know if that is how it works and if it is how to accomplish it.
I believe I have tentatively found a solution after asking some folks on the Gophers Slack.
Using the AWS provider's default VPC resource as a reference, I can "clone" the resource into one with a custom Create/Delete lifecycle.
Loose Example:
func defaultResourceConfiguration() *schema.Resource {
    drc := resourceConfiguration()
    drc.Create = resourceDefaultConfigurationCreate
    drc.Delete = resourceDefaultConfigurationDelete
    return drc
}

func resourceDefaultConfigurationCreate(d *schema.ResourceData, m interface{}) error {
    // Double-check it exists and update the resource instead.
    return resourceConfigurationUpdate(d, m)
}

func resourceDefaultConfigurationDelete(d *schema.ResourceData, m interface{}) error {
    log.Printf("[WARN] Cannot destroy Default Scope Configuration. Terraform will remove this resource from the state file; however, resources may remain.")
    return nil
}
This should allow me to provide an identical resource that is designed to interact with the already existing one created by its parent host.

Is there a function in the Kubernetes API to fetch all services by annotation?

I'm setting up a Kubernetes cluster to roll out our container applications. The applications actually need all the labels, but the label values are longer than 63 characters and I get an error. This makes me dependent on annotations.
An annotation for a service looks like this: com.example.development.london/component.proxy-config.secure-routes.backend.proxy-path. The / only serves to bypass an RFC domain error.
In a Golang application, all services of a namespace are requested - currently based on labels. For this I have used the following code so far.
func (kc *KubernetesCollector) generateRoutes(errorChannel chan<- error) {
    log.Println("INFO: Try to generate routes")
    services, err := kc.iface.Services(kc.namespace).List(metav1.ListOptions{
        LabelSelector: fmt.Sprintf("%s==true", ConvertLabelToKubernetesAnnotation(ProxyConfDiscoverableLabel)),
    })
    // ...
}

func ConvertLabelToKubernetesAnnotation(label string) string {
    return strings.Replace(label, "com.example.development.london.", "com.example.development.london/", -1)
}
But there seems to be no way to select services by annotation. Does anyone know another way to get all services matching an annotation with Go?
As specified in the Kubernetes documentation, annotations are meant for non-identifying information, so naturally you shouldn't use them for finding objects.
If that's an option, you can attach a prefix (max length of 253 characters) to your label in this manner: <label prefix>/<label name>. Additional information can be found from the link provided above.
There is no FieldSelector for annotations. What you can do is list all services and then filter them by the annotations found on each.
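The client-side filtering described above could look like this sketch. Service here is a minimal stand-in for corev1.Service; in real code you would range over the Items of the ServiceList returned by kc.iface.Services(kc.namespace).List(...) and the helper name FilterByAnnotation is hypothetical:

```go
package main

import "fmt"

// Service is a minimal stand-in for corev1.Service, carrying only
// the fields this example needs.
type Service struct {
	Name        string
	Annotations map[string]string
}

// FilterByAnnotation returns the services whose annotation `key`
// has exactly the value `value`.
func FilterByAnnotation(services []Service, key, value string) []Service {
	var out []Service
	for _, svc := range services {
		// Lookup on a nil map is safe in Go and returns the zero value.
		if svc.Annotations[key] == value {
			out = append(out, svc)
		}
	}
	return out
}

func main() {
	services := []Service{
		{Name: "proxy", Annotations: map[string]string{
			"com.example.development.london/discoverable": "true",
		}},
		{Name: "db"},
	}
	matched := FilterByAnnotation(services, "com.example.development.london/discoverable", "true")
	fmt.Println(len(matched), matched[0].Name)
}
```

The trade-off is that the filtering happens in your application rather than in the API server, so you pay for transferring the full list on every call.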

How to use kubebuilder's client.List method?

I'm working on a custom controller for a custom resource using kubebuilder (version 1.0.8). I have a scenario where I need to get a list of all the instances of my custom resource so I can sync up with an external database.
All the examples I've seen for Kubernetes controllers use either client-go or call the API server directly over HTTP. However, kubebuilder has also given me this client.Client object to get and list resources, so I'm trying to use that.
After creating a client instance from the passed-in Manager (i.e. mgr.GetClient()), I tried to write some code to get the list of all the Environment resources I created.
func syncClusterWithDatabase(c client.Client, db *dynamodb.DynamoDB) {
    // Sync environments
    // Step 1 - read all the environments the cluster knows about
    clusterEnvironments := &cdsv1alpha1.EnvironmentList{}
    c.List(context.Background(), /* what do I put here? */, clusterEnvironments)
}
The example in the documentation for the List method shows:
c.List(context.Background, &result);
which doesn't even compile.
I saw a few methods in the client package to limit the search to particular labels, or to a specific field with a specific value, but nothing to limit the result to a specific resource kind.
Is there a way to do this via the Client object? Should I do something else entirely?
So I figured it out - the answer is to pass nil for the second parameter. The type of the output pointer determines which kind of resource it actually retrieves.
According to the latest documentation, the List method is defined as follows:
List(ctx context.Context, list ObjectList, opts ...ListOption) error
If the List method you are calling has this definition, your code should compile. Since it takes variadic options to set the namespace and field matches, the only mandatory arguments are the Context and the ObjectList.
Ref: KubeBuilder Book
