Planning to use Protocol Buffers for event-driven communication across services, with Kafka at the heart of it.
I'll maintain the schemas in a GitHub repo. Changes to the schema would come in as pull requests. In the pull request's CI check, I want to add a validation that checks the backward compatibility of the schema change.
Do you know of any open solution to this problem? Or is everybody writing their own compatibility checker :)?
See: https://github.com/nilslice/protolock
From the README: "Protocol Buffer companion tool. Track your .proto files and prevent changes to messages and services which impact API compatibility."
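For reference, a minimal sketch of how protolock could back the CI check, assuming the generated proto.lock file is committed next to the .proto files:

protolock init    # run once; records the current state of the .proto files in proto.lock
protolock status  # run in the PR's CI check; reports changes that break compatibility with proto.lock
protolock commit  # run after an accepted change; updates proto.lock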
I discovered the tool as a result of your interesting question; I hadn't realized that I would benefit from it too!
Another tool, also referenced by Uber's Prototool, is Buf.
I tried out buf and it is really nice tooling for working with Protobuf.
Enforces API standards with its lint feature
buf lint
Maintains compatibility with its breaking feature
buf breaking --against ./.git#branch=master
Generates code for you without any headache with its generate feature. You just need to create a buf.gen.yaml (see the sketch below).
buf generate
Please refer to https://docs.buf.build/tour/generate-code for more details.
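For reference, a minimal buf.gen.yaml sketch, assuming Go code generation with the protoc-gen-go plugin (the output path is a placeholder to adapt):

version: v1
plugins:
  - name: go
    out: gen/go
    opt: paths=source_relative

With this file in place, buf generate writes the generated code under gen/go.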
I am using the library github.com/influxdata/influxdb1-client/v2, and I need to use it to query VictoriaMetrics. For example, I need to make the following query:
sort_desc(avg(idc_bandwidth_5m_data_cube_just_idc_bandwidth_kilobits{idc=~"$cluster", isp=~"$isp"}[5m]) by (idc))
What should I do? Or is there a better library to use? Help me!!!
Can you give me some sample code?
I'm afraid github.com/influxdata/influxdb1-client/v2 can't be used for reading data from VictoriaMetrics. You need a library which can send PromQL/MetricsQL queries via HTTP and parse the responses. I'm not sure whether there is a good Golang lib for that; I've only heard of a JS lib.
In general, sending queries to VictoriaMetrics or Prometheus and parsing the responses is rarely needed, and when it is needed, it is usually implemented from scratch. Please use the link below as a reference.
You might also be interested in the following issue: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/108
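To make the from-scratch approach concrete, here is a minimal Go sketch that queries the Prometheus-compatible /api/v1/query endpoint which VictoriaMetrics exposes. The address http://localhost:8428 is an assumption to adapt to your deployment, and the Grafana variables ($cluster, $isp) from your query must be replaced with concrete values before sending it:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"net/url"
)

// queryResponse models the subset of the /api/v1/query JSON response
// used by this sketch.
type queryResponse struct {
	Status string `json:"status"`
	Data   struct {
		Result []struct {
			Metric map[string]string `json:"metric"`
			Value  [2]interface{}    `json:"value"` // [unix timestamp, "value"]
		} `json:"result"`
	} `json:"data"`
}

func main() {
	// Assumed VictoriaMetrics address; adjust to your deployment.
	endpoint := "http://localhost:8428/api/v1/query"
	// $cluster and $isp replaced with concrete label filters.
	query := `sort_desc(avg(idc_bandwidth_5m_data_cube_just_idc_bandwidth_kilobits{idc=~"cluster1", isp=~"isp1"}[5m]) by (idc))`

	params := url.Values{}
	params.Set("query", query)
	resp, err := http.Get(endpoint + "?" + params.Encode())
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var qr queryResponse
	if err := json.NewDecoder(resp.Body).Decode(&qr); err != nil {
		log.Fatal(err)
	}
	for _, r := range qr.Data.Result {
		fmt.Printf("%v => %v\n", r.Metric, r.Value[1])
	}
}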
I currently have a primitive RPC setup relying on JSON transferred over secured sockets, but I would like to switch to gRPC. Unfortunately, I also need access to AF_UNIX on Windows (which Microsoft recently started supporting, but gRPC has not implemented).
Since I have an existing working connection (managed with a different library), my preference would be to just use that in conjunction with gRPC to send/receive commands in place of my JSON parsing, but I am struggling to identify the best way to do that.
I have seen "Plugging custom transport into gRPC", but this question differs in the following ways (as well as my hope for a more recent answer):
I want to avoid making changes to the core of gRPC. I'd prefer to extend it, if possible, from within my library, but the answer there implies adding a new transport to gRPC. If I did need to do this at the transport level, is there a mechanism to register it with gRPC after the core has been built?
I am unsure whether I need to define this as a full custom transport, since I already have an existing connection established and ready. I have seen some things that imply I could simply extend Channel, but I might be wrong.
I need to be able to support Windows, or at least modern versions of it (which means that the from_fd options gRPC provides are not available, since they are currently only implemented for POSIX).
Has anyone solved similar problems with gRPC?
I may have figured out my own answer. I seem to have been overly focused on gRPC, when the service definition component of Protobuf is not dependent on that.
"How can I write my own RPC implementation for Protocol Buffers utilizing ZeroMQ" is very similar to my use case, and https://developers.google.com/protocol-buffers/docs/proto#services seems to resolve my issue. (This also explains why I seem to have been mixing up the different kinds of "Channels" involved.)
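To make that concrete, here is a sketch of the idea; the service and message names are made up for illustration. With the generic-services options enabled, protoc generates abstract service/stub classes built on protobuf's own RpcChannel abstraction rather than on gRPC, and you implement RpcChannel's CallMethod to serialize requests onto your existing connection:

syntax = "proto3";

package commands;

// Generate protobuf's generic (non-gRPC) service classes, which are
// built on RpcChannel/RpcController rather than gRPC's transport.
option cc_generic_services = true;
option py_generic_services = true;

message CommandRequest {
  string name = 1;
}

message CommandResponse {
  int32 status = 1;
}

service CommandService {
  rpc Execute (CommandRequest) returns (CommandResponse);
}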
I welcome any improvements/suggestions, and hope that maybe this can be found in future searches by people that had the same confusion.
I'm using Mule 4.2.2 runtime. We use the error handling generated by APIKIT, and we customized it according to the customer's requirements; it is quite standard across all the upcoming APIs.
We are thinking of converting this into a connector so that it appears as a component/connector in the palette and can be reused across all the APIs instead of being copy-pasted every time.
This is like REST Connect for API specifications, which automatically converts into a connector as soon as it is published in Exchange (https://help.mulesoft.com/s/article/How-to-generate-a-connector-for-a-REST-API-for-Mule-3-x-and-4-x).
Do we have any option like the above for publishing a common Mule flow that will be converted to a component/connector?
If not, which of the following best suits my scenario?
1) using SDK
https://dzone.com/articles/mulesoft-custom-connector-using-mule-sdk-for-mule (or)
2) creating a JAR as mentioned in this page
https://www.linkedin.com/pulse/flow-reusability-mule-4-nagaraju-kshathriya
Please suggest which one is the best and easiest way in this case. Thanks in advance.
Using the Mule SDK (option 1) is useful for creating a connector or module in Java. Your question wasn't fully clear about what you want to encapsulate in a connector. I understand that what you want is to share parts of a flow as a connector in the palette, which is different. The XML SDK seems to be more in line with that. You will need to make some changes to encapsulate the flow elements, as described in the documentation. That's actually very similar to how REST Connect works.
The method described in (2) is for importing XML flows from a JAR file, but the method described by that link is actually incorrect for Mule 4. The right way to share flows through a library is the one described at https://help.mulesoft.com/s/article/How-to-add-a-call-to-an-external-flow-in-Mule-4. Note that this method doesn't create a connector that can be used from the Anypoint Studio palette.
From personal experience: use a common flow, put it into a repository, and include it as a dependency in the POM file (see the sketch below). An even better solution: include it as a flow in the domain app and use it along with your shared HTTPS connector.
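For illustration, the dependency could look like this; the coordinates are placeholders, and the assumption is that the common flows are packaged as a Mule plugin (hence the mule-plugin classifier):

<dependency>
    <groupId>com.example</groupId>
    <artifactId>common-error-handling</artifactId>
    <version>1.0.0</version>
    <classifier>mule-plugin</classifier>
</dependency>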
I wrote a lot of Java-based custom components. I liked them a lot and was proud of them. But the transition from Mule 3 to Mule 4 killed most of them. Even in Mule 4, MuleSoft periodically makes changes which render components incompatible with the runtime.
I was implementing a project with spring-data-jdbc and I found the Statement Builder API. Could this API be used for building native SQL repositories?
At this point the StatementBuilder API is considered internal, and use outside Spring Data is not encouraged, because it might undergo breaking changes without the normal deprecation cycle.
That said, it is a pretty isolated piece of code with a friendly OSS license, so in many cases it might be an acceptable risk to use it, with the fallback plan of cloning it into your own package should it change in a way that is not usable for you.
Since you have to share .proto files to define data and services for gRPC, the service provider and its clients need access to the same .proto files. Is there any common strategy for distributing these files? I want to avoid every project keeping its own copy of the .proto files in its Git repository, with team members manually editing these files or sharing them via email.
Is there any common best practice?
Unfortunately, there is no single common practice, although the main goal you should aim for is to store the proto files in one version-control repository.
During my investigation, I've found some interesting blog posts about that subject:
https://www.bugsnag.com/blog/libraries-for-grpc-services
https://www.crowdstrike.com/blog/improving-performance-and-reliability-of-microservices-communication-with-grpc/
https://medium.com/namely-labs/how-we-build-grpc-services-at-namely-52a3ae9e7c35
They cover many gRPC workflow considerations. Hope that helps!
In terms of best practices for sharing gRPC definitions, I would suggest, instead of sharing those files, using the gRPC Server Reflection Protocol (https://github.com/grpc/grpc/blob/master/doc/server-reflection.md).
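For example, in Go, enabling server reflection is a single registration call. This is only a sketch; the service registration is left as a comment since it depends on your generated code:

package main

import (
	"log"
	"net"

	"google.golang.org/grpc"
	"google.golang.org/grpc/reflection"
)

func main() {
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatal(err)
	}
	s := grpc.NewServer()
	// Register your generated service implementations here, e.g.
	// pb.RegisterYourServiceServer(s, &yourServer{})
	reflection.Register(s) // expose the reflection service to clients
	if err := s.Serve(lis); err != nil {
		log.Fatal(err)
	}
}

With the server running, a client such as grpcurl can discover the schema at runtime without any .proto files, e.g. grpcurl -plaintext localhost:50051 list enumerates the available services.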