Can Envoy's gRPC-JSON transcoding filter dynamically pull in descriptor set updates?

I've got an existing setup in which I've configured Envoy to do gRPC <-> JSON transcoding as described here. I'd like to be able to dynamically update the descriptor set that Envoy is using to derive the gRPC method shapes, without needing to restart Envoy, but I can't tell from the documentation whether this is possible or not.
Does anyone know if it is?
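For context, the transcoder described in the linked docs is driven by a statically configured descriptor set file. A minimal sketch of the filter config (descriptor path and service name are illustrative):

```yaml
http_filters:
- name: envoy.filters.http.grpc_json_transcoder
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_json_transcoder.v3.GrpcJsonTranscoder
    # Descriptor set produced with: protoc --include_imports --descriptor_set_out=proto.pb ...
    proto_descriptor: "/etc/envoy/proto.pb"
    services: ["my.package.MyService"]
```

The question is whether Envoy re-reads `/etc/envoy/proto.pb` when it changes, or whether the filter config itself must be re-delivered (e.g. via xDS) to pick up new method shapes.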

Related

How can I create an OpenTelemetry backend?

I am aware that there are open source backends like Jaeger, Zipkin etc and commercial vendors like DataDog, New Relic etc.
I wanted to know if there are any specifications we need to follow when creating a custom backend, or if there is a guide on how to do it.
I know that I can host a server and send telemetry to the server's URL. When I do this via a Collector it is in proto format, and if done via an Exporter it is in JSON format. Are there any other such points to consider?
I wanted to know if there are any specifications we need to follow when creating a custom backend, or if there is a guide on how to do it.
There is nothing like that. It's not within the scope of the OpenTelemetry project (at least for now). You are free to implement it in whatever way makes sense to you.
When I do this via a Collector it is in proto format and if done via an Exporter it is in JSON format
This is not entirely correct. There are various options for protocol + encoding. There are both OTLP Proto over HTTP/gRPC and HTTP/JSON exporters.

unable to modify flow in Apache NiFi 1.14.0 in HTTP mode

I understand that the official documentation recommends running NiFi with HTTPS, but it nonetheless documents running under plain HTTP, e.g. the nifi.web.http.port property.
Also, I'd like to incrementally incorporate NiFi into our current data infrastructure, starting with non-critical data pipelines, so a TLS layer isn't necessary right now and would add friction during deployment. I therefore decided to go the HTTP route.
After changing some settings, I can access NiFi's GUI at http://localhost:8080/nifi, but I find that I cannot make any changes to the flow: write operations, i.e. POST / PUT / DELETE requests, are rejected with HTTP 403.
NiFi doc says:
And by monitoring the API traffic between the GUI and the NiFi instance, I can confirm that the PermissionsEntity has both canRead:true and canWrite:true.
I used a containerized NiFi instance.
Has anyone encountered similar problems?
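For reference, the HTTP-mode settings in conf/nifi.properties look roughly like this (property names are from the NiFi admin guide; values are illustrative):

```properties
# conf/nifi.properties -- HTTP-only mode (illustrative values)
nifi.web.http.host=0.0.0.0
nifi.web.http.port=8080
# The HTTPS properties must be left empty so only the HTTP connector starts
nifi.web.https.host=
nifi.web.https.port=
nifi.remote.input.secure=false
```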
The root canvas may have been locked to the default single user that NiFi 1.14 generates when it starts up without security configuration.
First thing to try is right-clicking on the canvas and granting yourself access if you can.
The second option: try (re)moving flow.xml.gz, users.xml and authorizations.xml and then restarting NiFi. New files will be generated that may work better with anonymous access.
Either way, setting up security now will probably mean less friction down the road, not more. I strongly advise you to bite the bullet and get it set up securely.

consuming grpc service using go

I plan to use gRPC as an inter-service sync communication protocol.
There are lots of different services, and I have generated a pb.go file with all the relevant code for client and server using protoc with the Go gRPC plugin.
Now I'm trying to figure out the best way or the common way of consuming this service from another service.
Here is what I have so far:
Option 1
use the .proto file from the service (download it)
run the protoc compiler and generate the ...pb.go file for the consumer to use
Option 2
because the ...pb.go is already generated on the gRPC service side to implement the server, and my client is another service written in Go, I can expose this as a sub-module (another .mod file in a sub-directory)
use go get github.com/usr/my-cool-grpc-service/client
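Option 2's sub-module can be sketched as a go.mod in a client/ sub-directory of the service repository (module path taken from the example above; versions illustrative):

```
// client/go.mod -- sub-module exposing the generated stubs to consumers
module github.com/usr/my-cool-grpc-service/client

go 1.16

require (
	google.golang.org/grpc v1.40.0
	google.golang.org/protobuf v1.27.1
)
```

Consumers then depend on this module directly, without running protoc themselves.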
Option 2 seems more appealing to me because it makes the consumption of a service very easy and available for all other services that may require it.
On the other hand, I know that the .proto file is the contract that can generate clients for many different languages and should be used as the source of truth.
I fear that by choosing option 2 I may be unaware of pitfalls I might encounter with regard to backwards compatibility or other topics.
So, what is the idiomatic way of consuming a gRPC service?

Convert Resuable ErrorHandling flow in to connector/component in Mule4

I'm using Mule 4.2.2 Runtime. We use the error handling generated by APIkit, customized according to the customer's requirements, and it is quite standard across all our upcoming APIs.
I'm thinking of converting this into a connector so that it appears as a component/connector in the palette and can be reused across all the APIs instead of copy-pasting it every time.
Like REST Connect for API specifications, which automatically converts a specification into a connector as soon as it is published in Exchange (https://help.mulesoft.com/s/article/How-to-generate-a-connector-for-a-REST-API-for-Mule-3-x-and-4-x).
Is there any option like the above for publishing a common Mule flow so that it is converted into a component/connector?
If not, which option best suits my scenario:
1) using SDK
https://dzone.com/articles/mulesoft-custom-connector-using-mule-sdk-for-mule (or)
2) creating jar as mentioned in this page
https://www.linkedin.com/pulse/flow-reusability-mule-4-nagaraju-kshathriya
Please suggest which one is the best and easiest way in this case. Thanks in advance.
Using the Mule SDK (1) is useful to create a connector or module in Java. Your question wasn't fully clear about what you want to encapsulate in a connector. I understand that what you want is to share parts of a flow as a connector in the palette, which is different. The XML SDK seems more in line with that. You will need to make some changes to encapsulate the flow elements, as described in the documentation. That's actually very similar to how REST Connect works.
The method described in (2) is for importing XML flows from a JAR file, but the method described by that link is actually incorrect for Mule 4. The right way to implement sharing flows through a library is the one described at https://help.mulesoft.com/s/article/How-to-add-a-call-to-an-external-flow-in-Mule-4. Note that this method doesn't create a connector that can be used from Anypoint Studio palette.
From personal experience: use a common flow, put it in a repository, and include it as a dependency in the pom file. An even better solution: include it as a flow in the Domain app and use it along with your shared HTTPS connector.
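The pom-dependency approach can be sketched like this (the coordinates are hypothetical; the mule-plugin classifier is what lets a Mule 4 app import another Mule project's flows):

```xml
<!-- Hypothetical coordinates for the shared error-handling project -->
<dependency>
  <groupId>com.example</groupId>
  <artifactId>common-error-handler</artifactId>
  <version>1.0.0</version>
  <classifier>mule-plugin</classifier>
</dependency>
```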
I wrote a lot of Java-based custom components. I liked them a lot and was proud of them, but the transition from Mule 3 to Mule 4 killed most of them. Even in Mule 4, MuleSoft periodically makes changes that leave components incompatible with the runtime.

How can I embed NetLimiter in my application

I have a C# client application that connects to multiple servers. I noticed that it is necessary to use NetLimiter-activated rules in order to make my client connect with higher priority when there is heavy traffic on the client computer.
I did not find any documents about how to embed NetLimiter and create rules programmatically from this application. However, I read here that someone tried to use the NetLimiter API and failed.
I read somewhere that I can write my own application that uses the Windows Traffic Control (TC) API, here, and mark DSCP to set priorities. But I ran into this problem before setting the flow options of my C# application.
Please guide me with this issue.
Look here. Connect() and SetRule() are the only APIs available.
NetLimiter seems to be a COM object, so to use it from C# you need something like this:
dynamic myownlimiter = Activator.CreateInstance(Type.GetTypeFromProgID("NetLimiter.VirtualClient"));
myownlimiter.Connect("host", "port");
and then use SetRule() as described in the first link.
