Using ProtoBuf-net with gRPC

I am trying to build a PoC at work that utilizes gRPC. The Google document here takes us through a sample application. I was wondering whether protobuf-net, and specifically protogen, has the capability to understand service definitions and generate the classes necessary to perform gRPC calls? Or is this something being worked on? Would it work if I used Google's protoc for client and server code generation (which covers the service definitions and RPC calls) and protobuf-net for my business objects?

protobuf-net.Grpc is now a thing... albeit in preview. When .NET Core 3 comes out, we should be able to make this available.
It is inspired by the WCF approach, so your service interfaces are defined via:
namespace Whatever {
    [ServiceContract]
    public interface IMyAmazingService {
        ValueTask<SearchResponse> SearchAsync(SearchRequest request);
        // ... etc
    }
}
Servers just implement the interface:
public class MyServer : IMyAmazingService {
    // ...
}
(how you host them depends on whether you're using ASP.NET Core, or the native/unmanaged gRPC libraries; both work)
and clients just request the interface:
var client = http.CreateGrpcService<IMyAmazingService>();
var result = await client.SearchAsync(query);
In the above case, this would be inferred to be the Whatever.MyAmazingService / Search service in gRPC terms, i.e.
package Whatever;
// ...
service MyAmazingService {
    rpc Search (SearchRequest) returns (SearchResponse) {}
}
but the service/method names can be configured more explicitly if you prefer. The above is a unary example; for unary operations, the result can be any of T, Task<T>, or ValueTask<T> - or void / Task / ValueTask (all of which map to .google.protobuf.Empty, as does a method without a suitable input parameter).
The streaming/duplex operations are inferred automatically if you use IAsyncEnumerable<T> (for some T) for the input parameter (client-streaming), the return type (server-streaming), or both (duplex).
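For example (a sketch with hypothetical names), a duplex method declared as IAsyncEnumerable<ChatMessage> ChatAsync(IAsyncEnumerable<ChatMessage> messages) would surface in gRPC terms as:

service MyAmazingService {
    rpc Chat (stream ChatMessage) returns (stream ChatMessage) {}
}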

It is something that I would love to get around to, but to date, no: I haven't had a need to look into this, and it hasn't hit the top of my backlog. I try to keep an eye on what features people want, so it is good to know that you're after it, but today: no. Mostly this is a time thing: protobuf-net progresses out of my free/spare time, unless I have a genuine justification to spend "work time" on it.
Update: I'm actively talking with the Microsoft folks who are working on gRPC for .NET, and it seems likely that we're going to try to work together here so that this becomes possible with the gRPC things in the .NET Core 3.0 timescale - meaning: we'd share an implementation of the service invocation code, but allow it to work with multiple serializer APIs.

Related

Is it a good idea to use decorators in a high-load application?

We are building a high-load backend API in NestJS.
I am searching for a good solution for REST request validation.
We have some specific requirements for internationalization, so we decided not to use the standard schema-based validation pipes, which do not handle internationalization well.
I am considering a custom mapper class for each request DTO, which takes the request data and transforms it into the specific DTO:
class CreateAccountRequestMapper {
    map(data: any): CreateAccountRequestDto {}
}
If the input is not valid, it throws an API-specific exception.
Is it a good idea in terms of performance to implement this via decorators + pipes?
I do not know the concept well, but it seems to me that I would need to perform an unnecessary object instantiation on each request, whereas if I used the mapper directly in the handler I would avoid it.
Do decorators mean significant overhead in general?

How to improve gRPC development?

I find it tedious to define the protobuf messages again in the .proto file after the entity model is ready.
For example, to expose CRUD operations through gRPC, you need to define the table schema as messages in .proto files, because gRPC requires it.
In traditional RESTful API development, we don't need to define the messages because we just return some JSON, and the JSON object can be arbitrary.
Any suggestions?
P.S. I know gRPC is more efficient than RESTful APIs at run time. However, I find it far less efficient than RESTful APIs at development time.
Until I find an elegant way to improve the efficiency, I am using an ugly one: define a JSON message type:
syntax = "proto3";
package user;

service User {
    rpc FindOneByJSON(JSON) returns (JSON) {}
    rpc CreateByJSON(JSON) returns (JSON) {}
}

message JSON {
    string value = 1;
}
It's ugly because it needs the invoker to JSON.stringify() the arguments and JSON.parse() the response.
That is because gRPC and REST follow different concepts.
In REST, the server maintains the state and you just control it from the client (that is what you use the GET, POST, PUT, PATCH, and DELETE request methods for). In contrast, a procedure call has a well-defined return type that is reliable and self-describing. gRPC does not follow the concept of the server being the single source of truth concerning an object's state; instead -- conceptually -- you can interact with the server using regular calls, as you would in a local setup.
By the way, in good RESTful design you do use schemas for your JSON returns, so in fact they are not arbitrary, even though you can abuse them to be. For example, check the OpenAPI 3 specification for the response object definition: responses usually contain references to schemas.
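To illustrate, here is a hypothetical typed version of the User service above (the message and field names are made up); the contract stays self-describing, and the JSON.stringify()/JSON.parse() wrapper disappears:

syntax = "proto3";
package user;

service User {
    // Typed requests and replies replace the stringified-JSON wrapper.
    rpc FindOne(FindOneRequest) returns (UserReply) {}
    rpc Create(CreateRequest) returns (UserReply) {}
}

message FindOneRequest {
    string id = 1;
}

message CreateRequest {
    string name = 1;
    string email = 2;
}

message UserReply {
    string id = 1;
    string name = 2;
    string email = 3;
}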

How to use a single AWS Lambda for both Alexa Skills Kit and API.AI?

In the past, I set up two separate AWS Lambdas written in Java: one for use with Alexa and one for use with API.AI. They simply return "Hello world" to each assistant API. So although they are simple, they work. As I wrote more and more code for each one, I started to see how similar my Java code was, and that I was just repeating myself by having two separate Lambdas.
Fast forward to today.
What I'm working on now is having a single AWS Lambda that can handle input from both Alexa and API.AI, but I'm having some trouble. Currently, my thought is that when the Lambda runs, there would be a simple if statement, like so:
(The following is not real code, just what I think I can do in my head.)
if (figureOutIfInputType.equals("alexa")) {
    runAlexaCode();
} else if (figureOutIfInputType.equals("api.ai")) {
    runApiAiCode();
}
The thing is, now I need to somehow tell whether the function is being called by Alexa or by API.AI.
This is my actual Java right now:
public class App implements RequestHandler<Object, String> {
    @Override
    public String handleRequest(Object input, Context context) {
        System.out.println("myLog: " + input.toString());
        return "Hello from AWS";
    }
}
I then ran the Lambda from Alexa and from API.AI to see what Object input would be generated in Java.
API.ai
{id=asdf-6801-4a9b-a7cd-asdffdsa, timestamp=2017-07-28T02:21:15.337Z, lang=en,
result={source=agent, resolvedQuery=hi how are you, action=,
actionIncomplete=false, parameters={}, contexts=[],
metadata={intentId=asdf-3a2a-49b6-8a45-97e97243b1d7, webhookUsed=true,
webhookForSlotFillingUsed=false, webhookResponseTime=182,
intentName=myIntent}, fulfillment={messages=[{type=0, speech=I have failed}]},
score=1}, status={code=200, errorType=success},
sessionId=asdf-a7ac-43c8-8ae8-bc1bf5ecaad0}
Alexa
{version=1.0, session={new=true,
sessionId=amzn1.echo-api.session.asdf-7e03-4c35-9d98-d416eefc5b23,
application={applicationId=amzn1.ask.skill.asdf-a02e-4938-a747-109ea09539aa},
user={userId=amzn1.ask.account.asdf}},
context={AudioPlayer={playerActivity=IDLE},
System={application={applicationId=amzn1.ask.skill.07c854eb-a02e-4938-a747-109ea09539aa},
user={userId=amzn1.ask.account.asdf},
device={deviceId=amzn1.ask.device.asdf, supportedInterfaces={AudioPlayer={}}},
apiEndpoint=https://api.amazonalexa.com}},
request={type=IntentRequest,
requestId=amzn1.echo-api.request.asdf-5de5-4930-8f04-9acf2130e6b8,
timestamp=2017-07-28T05:07:30Z, locale=en-US,
intent={name=HelloWorldIntent, confirmationStatus=NONE}}}
So now I have both my Alexa and API.AI output, and they're different, which is good: I'll be able to tell which one is which. But I'm stuck. I'm not really sure whether I should try to create an AlexaInput object and an ApiAiInput object.
Am I doing this all wrong? Am I wrong to try to have one Lambda fulfill my "assistant" requests from more than one service (Alexa and API.AI)?
Any help would be appreciated. Surely someone else must be writing their assistant functionality in AWS and wants to reuse their code for both "assistant" platforms.
I had the same question and the same thought, but as I got further and further into implementing it, I realized that it wasn't quite practical, for one big reason:
While a lot of my logic needed to be the same, the format of the results was different. Sometimes even the details or formatting of the results would be different.
What I did was go back to some concepts that were familiar in web programming by dividing it into two parts:
A back-end system that was responsible for taking parameters and applying the business logic to produce results. These results would be fairly low-level: not entire phrases, but more a set of key/value pairs that indicated what kind of result to give and what values would be needed in that result.
A front-end system that was responsible for handling things that were Alexa/Assistant specific. So it would take the request, extract parameters and state, call the back-end system with this information, get a result back which included what kind of reply to send and the values needed, and then format the exact phrase (and any other supporting info, such as a card or whatever) and put it into a properly formatted response.
The front-end components would be a different lambda function for each agent type, mostly to make the logic a little cleaner. The back-end components can either be a library function or another lambda function, whatever makes the most sense for the task, but is independent of the front-end implementation.
I suppose one could also do this by having an abstract parent class that implements the back-end logic, and having the front-end logic be subclasses of it. I wouldn't do it this way because it doesn't provide as clear an interface boundary between the two, but it's not unreasonable.
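A minimal Java sketch of the two-part split described above (all class and method names here are hypothetical):

import java.util.Map;

// Back-end: business logic only, independent of any assistant platform.
// It returns low-level key/value pairs that say what kind of result to give
// and which values are needed to render it.
class AssistantBackend {
    Map<String, String> fulfill(String intentName, Map<String, String> parameters) {
        // Real business logic would go here; this stub always produces a greeting.
        return Map.of(
                "replyKind", "greeting",
                "name", parameters.getOrDefault("name", "world"));
    }
}

// Front-end: one per agent. It extracts parameters and state from the raw
// request, calls the back-end, and formats the exact phrase and response
// envelope that this particular agent expects.
class AlexaFrontEnd {
    private final AssistantBackend backend = new AssistantBackend();

    String handle(Map<String, String> extractedSlots) {
        Map<String, String> result = backend.fulfill("HelloWorldIntent", extractedSlots);
        // A real skill would wrap this phrase in the full Alexa response JSON.
        return "Hello, " + result.get("name") + "!";
    }
}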
You can achieve the result (code reuse) a different way.
Firstly, create a method for each type of event (Alexa, API Gateway, etc.) using the aws-lambda-java-events library. Some information here:
http://docs.aws.amazon.com/lambda/latest/dg/java-programming-model-handler-types.html
Each entry-point method should deal with the semantics of the event triggering it (API Gateway, etc.) and call into common code to give you code reuse (see the sketch after these steps).
Secondly, upload your JAR/ZIP to an S3 bucket.
Thirdly, for each event you want to handle, create a Lambda function, referencing the same ZIP/JAR in the S3 bucket and specifying the relevant entry point.
This way, you'll get code reuse without having to juggle multiple copies of the code on AWS, albeit at the cost of having multiple Lambdas defined.
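A rough sketch of that layout (the class and method names are hypothetical; each Lambda function's handler setting would point at one method, e.g. com.example.AssistantHandlers::handleAlexa):

package com.example;

import java.util.Map;
import com.amazonaws.services.lambda.runtime.Context;

public class AssistantHandlers {

    // Entry point for the Lambda function triggered by Alexa.
    public String handleAlexa(Map<String, Object> input, Context context) {
        // Deal with Alexa-specific semantics here, then reuse the common code.
        return commonLogic("alexa", input);
    }

    // Entry point for the Lambda function triggered by API.AI.
    public String handleApiAi(Map<String, Object> input, Context context) {
        // Deal with API.AI-specific semantics here, then reuse the common code.
        return commonLogic("api.ai", input);
    }

    // The shared code both entry points call into.
    private String commonLogic(String source, Map<String, Object> input) {
        return "Hello from AWS via " + source;
    }
}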
There's a great tool that supports working this way, called the Serverless Framework, which I'd highly recommend looking at:
https://serverless.com/framework/docs/providers/aws/
I've been using a single Lambda to handle Alexa ASK and Microsoft LUIS.ai responses. I'm using Python instead of Java, but the idea is the same, and I believe that using an AlexaInput and an ApiAiInput object, both implementing the same interface, should be the way to go.
I first use the context information to identify where the request is coming from and parse it into the appropriate object (I use a simple nested dictionary). Then I pass this to my main processing function and, finally, pass the output to a formatter, again based on the context. The formatter will be aware of what you need to return. The only caveat is handling session information, which in my case I serialize to my own DynamoDB table anyway.
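In Java, the detection-and-parse step might look roughly like this (the type names are hypothetical; the discriminating key and the intent paths are taken from the two dumps shown in the question):

import java.util.Map;

// Hypothetical shared interface: each platform-specific input type knows how
// to pull the intent name (and any other values) out of its own payload.
interface AssistantInput {
    String intentName();
}

class AlexaInput implements AssistantInput {
    private final Map<String, Object> raw;
    AlexaInput(Map<String, Object> raw) { this.raw = raw; }

    @SuppressWarnings("unchecked")
    public String intentName() {
        // Alexa nests the intent under request -> intent -> name.
        Map<String, Object> request = (Map<String, Object>) raw.get("request");
        Map<String, Object> intent = (Map<String, Object>) request.get("intent");
        return (String) intent.get("name");
    }
}

class ApiAiInput implements AssistantInput {
    private final Map<String, Object> raw;
    ApiAiInput(Map<String, Object> raw) { this.raw = raw; }

    @SuppressWarnings("unchecked")
    public String intentName() {
        // API.AI nests the intent under result -> metadata -> intentName.
        Map<String, Object> result = (Map<String, Object>) raw.get("result");
        Map<String, Object> metadata = (Map<String, Object>) result.get("metadata");
        return (String) metadata.get("intentName");
    }
}

class AssistantInputs {
    // Alexa payloads carry a top-level "session" key; API.AI payloads carry
    // "result" instead, so a single key check tells the two apart.
    static AssistantInput from(Map<String, Object> raw) {
        return raw.containsKey("session") ? new AlexaInput(raw) : new ApiAiInput(raw);
    }
}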

How to make the afterInsert / afterUpdate GORM methods asynchronous

Grails users know that the data access layer of this framework (GORM) offers AOP-style hooks that separate this cross-cutting concern from the other layers: the afterInsert, afterUpdate, beforeInsert, ... methods.
class Person {
    def afterInsert() {
        // ... will be executed after inserting a record into the Person table
    }
}
I have searched for whether these methods are invoked synchronously or asynchronously (relative to the instantiation), and I cannot find the answer.
My question: if they are synchronous, will GORM break if we force those methods to be asynchronous?
UPDATE:
Indeed, we want to send mails without using a ready-made plugin, as we have our own API.
There are a great number of ways to accomplish what you are looking for, and without knowing all your requirements it's difficult to give you a solution that meets all of them. However, based on your question and the comments provided, you could use the built-in asynchronous features in Grails to accomplish this.
This is just a sketch/example of something I came up with off the top of my head:
import static grails.async.Promises.*

class Person {
    // ...
    def afterUpdate() {
        def task1 = task {
            // whatever code you need to run goes here
        }
        onComplete([task1]) {
            // anything you want to run after the task completes, or nothing at all
        }
    }
    // ...
}
This is just one option. Again, there are a lot of options available to you. You could send a JMS message instead and have it processed on a different machine, you could use some kind of eventing system, or you could even use Spring AOP and thread pools and abstract this even further. It depends on what your requirements are, and on what your capabilities are as well.

How to consume multiple services using ServiceTracker efficiently?

I would like to use a ServiceTracker to consume the services published by our company.
Instead of creating a new ServiceTracker for each service I want to consume, I thought it would be better to create just one with a filter and then get the services from it:
Filter filter = ctx.createFilter("(" + Constants.OBJECTCLASS + "=com.mycomp*)");
tracker = new ServiceTracker(ctx, filter, null);
The problem with this approach is that I then need to iterate over the service references the tracker has found, examine their objectClass property, and see whether each one can be assigned to the service object, which is very cumbersome and error prone due to the casting required.
Any other ideas on how to consume multiple services in a more elegant way?
I think it is the wrong question :-) From the question I infer that you have a method that takes a service from your company and you want that method called. That is, somewhere in your code you need to be informed about a specific type com.mycomp.X; you're not interested in general services from your company, you have a clear type dependency. In your question you assume that they need to be dispatched centrally, which is usually not robust, is error prone, and becomes a maintenance hotspot: every time there is a new company service, you need to update the dispatch method.
A MUCH better solution seems to be to use Declarative Services (DS) with bndtools and its annotations. In that model, each place where you need a service looks like:
@Component
public class SomeMyCompComponent {
    // ...
    @Reference
    void foo(com.mycomp.X x) { /* ... */ }
    // ...
}
In this model, you do not need to centrally maintain a dispatcher; any class can get the services it needs when it needs them. This model also accurately handles multiple dependencies, and lots more goodies.
Maybe I do not understand the problem correctly, because I inferred the problem from the solution you required. However, I think you are trying to abuse the ServiceTracker for a task it was not intended to do.
Unfortunately, DS is not built into the framework, as we should have done :-(
You could subclass ServiceTracker and add methods that provide direct access to the service types you are interested in. For example, you could store the services in a typesafe heterogeneous container [1]. Then you would be able to call methods on your ServiceTracker subclass that take the type of the service you are interested in, and the services could easily be looked up in the typesafe heterogeneous container.
[1] Effective Java, 2nd Ed., Item 29.
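A minimal sketch of that idea, assuming the raw (pre-generics) ServiceTracker API used in the question; the class name and the typed lookup method are made up:

import org.osgi.framework.BundleContext;
import org.osgi.framework.Constants;
import org.osgi.framework.Filter;
import org.osgi.framework.ServiceReference;
import org.osgi.util.tracker.ServiceTracker;

public class TypedServiceTracker extends ServiceTracker {

    public TypedServiceTracker(BundleContext ctx, Filter filter) {
        super(ctx, filter, null);
    }

    // Typed lookup: finds the first tracked service registered under the
    // requested interface. type.cast() keeps the lookup typesafe, so callers
    // never need their own casts.
    public <T> T getService(Class<T> type) {
        ServiceReference[] refs = getServiceReferences();
        if (refs == null) {
            return null;
        }
        for (ServiceReference ref : refs) {
            for (String name : (String[]) ref.getProperty(Constants.OBJECTCLASS)) {
                if (name.equals(type.getName())) {
                    return type.cast(getService(ref));
                }
            }
        }
        return null;
    }
}

Usage would then read, for example, com.mycomp.Foo foo = tracker.getService(com.mycomp.Foo.class); with no cast at the call site.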
