How can I create an OpenTelemetry backend?

I am aware that there are open-source backends like Jaeger, Zipkin, etc., and commercial vendors like Datadog, New Relic, etc.
I wanted to know whether there is any specification we need to follow when creating a custom backend, or a guide on how to do it.
I know that I can host a server and send telemetry to the server's URL. When I do this via a Collector, the data is in Protobuf format, and if done via an Exporter it is in JSON format. Are there any other such points to consider?

I wanted to know whether there is any specification we need to follow when creating a custom backend, or a guide on how to do it.
There is nothing like that. It's not within the scope of the OpenTelemetry project (at least for now). You are free to implement it in whatever way makes sense to you.
When I do this via a Collector, the data is in Protobuf format, and if done via an Exporter it is in JSON format
This is not entirely correct. There are several options for protocol + encoding: OTLP can be sent as Protobuf over gRPC or HTTP, or as JSON over HTTP, and there are exporters for these combinations.
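For illustration, here is a minimal, untested Go sketch of pointing the OTLP/HTTP trace exporter at a self-hosted backend; my-backend.example.com:4318 is a placeholder, and the backend would need to accept OTLP Protobuf requests on the standard path (/v1/traces):

package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	ctx := context.Background()

	// Exporter sending Protobuf-encoded OTLP over plain HTTP to a custom backend.
	exp, err := otlptracehttp.New(ctx,
		otlptracehttp.WithEndpoint("my-backend.example.com:4318"),
		otlptracehttp.WithInsecure(),
	)
	if err != nil {
		log.Fatalf("creating OTLP exporter: %v", err)
	}

	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exp))
	defer func() { _ = tp.Shutdown(ctx) }()
	otel.SetTracerProvider(tp)
}

The same backend URL could instead be configured as the endpoint of the Collector's OTLP exporter, so the only hard requirement on a custom backend is that it understands OTLP in whichever protocol/encoding combination you choose.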

Related

How to consume Google PubSub opencensus metrics using GoLang?

I am new to Google PubSub. I am using Go for the client library.
How can I see the OpenCensus metrics recorded by the google-cloud-go library?
I have already successfully published a message to Google PubSub. Now I want to see these metrics, but I cannot find them in Google Stackdriver.
PublishLatency = stats.Float64(statsPrefix+"publish_roundtrip_latency", "The latency in milliseconds per publish batch", stats.UnitMilliseconds)
https://github.com/googleapis/google-cloud-go/blob/25803d86c6f5d3a315388d369bf6ddecfadfbfb5/pubsub/trace.go#L59
This is curious; I'm surprised to see these (machine-generated) APIs sprinkled with OpenCensus (Stats) integration.
I've not tried this but I'm familiar with OpenCensus.
One of OpenCensus' benefits is that it loosely-couples the generation of e.g. metrics from the consumption. So, while the code defines the metrics (and views), I expect (!?) the API leaves it to you to choose which Exporter(s) you'd like to use and to configure these.
In your code, you'll need to import the Stackdriver exporter (and any other exporters you wish to use) and then follow these instructions:
https://opencensus.io/exporters/supported-exporters/go/stackdriver/#creating-the-exporter
NOTE I encourage you to look at the OpenCensus Agent too, as this further decouples your code; you reference the generic OpenCensus Agent in your code and configure the agent to route e.g. metrics to e.g. Stackdriver.
For Stackdriver, you will need to configure the exporter with a GCP project ID, and that project will need to have Stackdriver Monitoring enabled (and configured). I've not used Stackdriver in some months, but this used to require a manual step too. The easiest way to check is to visit:
https://console.cloud.google.com/monitoring/?project=[[YOUR-PROJECT]]
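As a rough, untested sketch (in Go, matching the client library), the exporter setup from those instructions looks roughly like this; my-gcp-project is a placeholder, and the view-registration comment assumes the pubsub package exports predefined views (check its docs for the exact names):

package main

import (
	"log"
	"time"

	"contrib.go.opencensus.io/exporter/stackdriver"
	"go.opencensus.io/stats/view"
)

func main() {
	sd, err := stackdriver.NewExporter(stackdriver.Options{
		ProjectID: "my-gcp-project", // placeholder: a project with Stackdriver Monitoring enabled
	})
	if err != nil {
		log.Fatalf("creating Stackdriver exporter: %v", err)
	}
	defer sd.Flush()

	// Register the exporter and report aggregated view data periodically
	// (Stackdriver expects a reporting period of at least 60 seconds).
	view.RegisterExporter(sd)
	view.SetReportingPeriod(60 * time.Second)

	// You will likely also need to register the library's predefined views, e.g.
	//   view.Register(pubsub.DefaultPublishViews...)
	// before creating the pubsub client and publishing as usual.
}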
If I understand the intent (!) correctly, I expect API calls will then record stats against the metrics, in the views defined in the code that you referenced.
The easiest way to confirm that metrics are being shipped to Stackdriver is to query one of them using Stackdriver's Metrics Explorer:
https://console.cloud.google.com/monitoring/metrics-explorer?project=[[YOUR-PROJECT]]
You may wish to test this approach using the Prometheus exporter because it's simpler. After configuring the Prometheus exporter and exposing its HTTP handler, you can run your code and curl the metrics that are being generated at:
http://localhost:8888/metrics
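A minimal sketch (untested) of that check in Go: register the OpenCensus Prometheus exporter and serve its handler on :8888, then curl the URL above:

package main

import (
	"log"
	"net/http"

	"contrib.go.opencensus.io/exporter/prometheus"
	"go.opencensus.io/stats/view"
)

func main() {
	pe, err := prometheus.NewExporter(prometheus.Options{Namespace: "demo"})
	if err != nil {
		log.Fatalf("creating Prometheus exporter: %v", err)
	}
	view.RegisterExporter(pe)

	// The exporter is an http.Handler; expose it so the recorded metrics
	// can be inspected with: curl http://localhost:8888/metrics
	http.Handle("/metrics", pe)
	log.Fatal(http.ListenAndServe(":8888", nil))
}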
NOTE OpenCensus is being (!?) deprecated in favor of a replacement solution called OpenTelemetry.

Consuming a gRPC service using Go

I plan to use gRPC as an inter-service synchronous communication protocol.
There are lots of different services and I have generated a pb.go file with all the relevant code for client and server using protoc with the go-rpc plugin.
Now I'm trying to figure out the best way or the common way of consuming this service from another service.
Here is what I have so far:
Option 1
use the .proto file from the service (download it)
run the protoc compiler and generate the ...pb.go file for the consumer to use
Option 2
because the ...pb.go is already generated on the gRPC service side to implement the server, and my client is another service written in Go, I can expose it as a submodule (another .mod file in a subdirectory)
use go get github.com/usr/my-cool-grpc-service/client
Option 2 seems more appealing to me because it makes the consumption of a service very easy and available for all other services that may require it.
On the other hand, I know that the .proto file is the contract that can generate clients for many different languages and should be used as the source of truth.
I fear that by choosing option 2 I might be unaware of possible pitfalls with regard to backwards compatibility or other issues.
So, what is the idiomatic way of consuming a gRPC service?
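For concreteness, here is a hedged sketch of what option 2 would look like from the consumer's side; the module path and the generated names (NewMyCoolServiceClient, DoSomething, DoSomethingRequest) are hypothetical placeholders:

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	pb "github.com/usr/my-cool-grpc-service/client" // hypothetical published client package
)

func main() {
	// Dial the service; plaintext credentials are used here only for the example.
	conn, err := grpc.Dial("my-cool-grpc-service:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	client := pb.NewMyCoolServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	if _, err := client.DoSomething(ctx, &pb.DoSomethingRequest{}); err != nil {
		log.Fatalf("DoSomething: %v", err)
	}
}

Option 1 would produce exactly the same consuming code; the difference is only whether the generated package lives in the consumer's repository or is fetched from the service's module.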

Parse Cloud - why is this needed?

I'm new to Parse and I've just set up my server and dashboard on my local machine.
For my use case, I don't just need the simple API from Parse; I need to write a server (with Node.js + Express) to handle user requests.
I've just seen how to integrate an Express application with Parse, so my app, instead of talking to the Parse server directly, will use my server, which will serve:
The standard Parse API (/classes etc.)
All my other routes, which do not necessarily depend on the Parse API
Is this correct?
Reading online, I've seen that Parse Cloud is meant to extend Parse functionality with additional "routing" (if I have understood correctly).
So, in my application I will have:
The standard API (as described above)
All other routes (which do not necessarily depend on Parse)
Other routes (which come from Cloud Code) that use the Parse API
So, is Parse Cloud just a "simple" way to write additional routing? (I've seen that the job function exists too, but I haven't studied it yet.)
I'm asking because I'm a little confused about the real need for it; I'd just like more info on when to use it.
Thanks
EDIT
Here is an example (which in part comes from the Parse docs).
I have a Video class with a director name field.
In my application (iOS, Android, etc.) I set up a view that needs to show all the Videos from a particular director.
I have three possible approaches:
Get all Videos (/classes/videos) and then filter them directly in the app
Write a Node.js + Express endpoint (http://blabla.com/videos/XXX, where XXX is the director), get the result with the Parse JS API, and send it back to the app
Write a Cloud function (which, if I have understood correctly, responds at /functions/) that does the same as the Express route
This is just a small example, but is this the intended usage of Parse Cloud? (Or at least one of them :))

Management layer above Thrift

Thrift sounds awesome, but I can't find some basic things I'm used to from other RPC frameworks (such as HttpServlet). Examples of the things I can't find: session management, filtering, upload/download progress.
I understand that the missing pieces might belong in a management layer on top of Thrift. If so, is there any example of such a layer? Perhaps AOP (aspect-oriented programming)?
I can't imagine such a layer that compiles to all languages, and that's what I'm missing. Taking session management as an example, there might be several clients that all need to do some authentication and pass the session_id with each RPC. I would expect a similar API for all languages for doing so.
Does anyone know of a management layer for Thrift?
So thrift itself is not going to help you out a lot here.
I have had similar desires, and have a few suggestions:
1. Put your management objects into the IDL
Simply add an api token or common transfer data struct as a parameter to all of your service methods. Set it as parameter id 15 so that it will always be the last parameter, even if you add others in the middle.
As the first step in your handler you can validate/store/do whatever with the extra data.
This has the advantage that it is valid in any platform that thrift supports.
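As a hedged illustration (plain Go, with hypothetical names standing in for Thrift-generated types), a handler that treats the extra struct this way could look like:

package main

import (
	"context"
	"errors"
	"fmt"
)

// Stand-in for the common struct you would declare in the IDL and pass as parameter 15.
type RequestContext struct {
	SessionID string
	AuthToken string
}

type UserServiceHandler struct {
	validSessions map[string]bool
}

func (h *UserServiceHandler) GetUser(ctx context.Context, userID int64, rc *RequestContext) (string, error) {
	// First step in the handler: validate/store/do whatever with the extra data.
	if rc == nil || !h.validSessions[rc.SessionID] {
		return "", errors.New("invalid session")
	}
	// Real work happens only after the management data has been checked.
	return fmt.Sprintf("user-%d", userID), nil
}

func main() {
	h := &UserServiceHandler{validSessions: map[string]bool{"abc123": true}}
	fmt.Println(h.GetUser(context.Background(), 42, &RequestContext{SessionID: "abc123"}))
}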
2. Use thrift over http
If you use http as your transport, you can include whatever data as you want as http headers, and the thrift content as the body.
This will often require a custom http client for every platform you use to inject the data, and a custom handler on the server to use the data, but neither of those are prohibitively difficult.
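A rough, untested sketch of this with the Apache Thrift Go library; the URL is a placeholder and the generated client in the trailing comment is hypothetical:

package main

import (
	"log"

	"github.com/apache/thrift/lib/go/thrift"
)

func main() {
	trans, err := thrift.NewTHttpClient("http://svc.example.com/thrift") // placeholder URL
	if err != nil {
		log.Fatalf("creating transport: %v", err)
	}

	// THttpClient lets you attach arbitrary headers to every request, so the
	// server (or a proxy in front of it) can read e.g. a session id without
	// touching the IDL.
	trans.(*thrift.THttpClient).SetHeader("X-Session-Id", "abc123")

	if err := trans.Open(); err != nil {
		log.Fatalf("opening transport: %v", err)
	}
	defer trans.Close()

	// Wrap the transport with a protocol and the generated client, e.g.:
	//   client := gen.NewUserServiceClientFactory(trans, thrift.NewTBinaryProtocolFactoryDefault())
}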
3. Hack the protocol
It is possible to create your own custom protocol that wraps another protocol and injects custom data. Take a look at how the multiplexed protocol works in the thrift library for most languages:
C# here. It sends the method name across the wire as service:method. The multiplexed processor unwraps this encoding and passes it on to the appropriate processor.
I have used a similar method to encode arbitrary key/value pairs (like http headers) inside the method name.
The downside to this is that you need to write a more complicated extension for each platform you will be using, but only once per platform. How this works varies a bit from language to language, but it is generally simple enough once you have figured it out.
These are just a few ideas I have had, and I am sure there are others. The nice thing about thrift is how the individual components are decoupled from each other. If you have special needs you can swap any of them out as needed to add specific functionality.

Exporting data from an Ada application with websockets

I'm developing a school project where I have a core written in Ada that generates data.
As required by the project, I need to send all the newly produced information, at a certain interval, to a remote web server via websocket.
In JavaScript it is really easy to connect to a websocket:
var exampleSocket = new WebSocket("ws://www.example.com/socketserver", "protocolOne");
I would like to execute a similar command in Ada; is that possible?
Might it be possible to work around the problem by calling an HTML page (with GET parameters) containing JavaScript code, so that this page manages the websocket connection with the remote web server?
For those still looking for this answer: AWS now supports websockets...
https://docs.adacore.com/aws-docs/aws/high_level_services.html#websockets
Both AWS and Black support websockets. AWS is the more mature of the two, so I suggest that you use it.
