We already have Twirp, which provides RPC and REST endpoints, so why do we need gRPC-Gateway? What advantages does it provide compared to Twirp? Is the only difference that gRPC-Gateway lets us define custom endpoints, or is there more? What can gRPC-Gateway do that Twirp can't?
Twirp and gRPC-Gateway are similar: they both build API services out of a protobuf service definition.
Main differences:
gRPC only uses Protobuf over HTTP/2, which means browsers can't easily talk directly to gRPC-based services.
Twirp works with both Protobuf and JSON, over HTTP/1.1 and HTTP/2, so any client can communicate easily.
gRPC is a full framework with many features. Very powerful stuff.
Twirp is small and minimal. It only has a few basic features, but it is a lot easier to manage.
To generate the RPC scaffolding for Go services, we can consider gRPC from the beginning or look towards the simpler Twitch RPC framework, i.e. Twirp.
Common reasons for choosing Twirp over gRPC are as follows:
Twirp comes with HTTP 1.1 support.
Twirp supports JSON transport out of the gate.
gRPC re-implements HTTP/2 outside of net/http.
And reasons for gRPC over Twirp are:
gRPC supports streaming.
gRPC makes wire compatibility promises.
More functionality on the networking level.
Twirp supports JSON-encoded requests and responses in addition to the binary Protobuf codec, while still behaving as RPC. You can send an HTTP POST to an endpoint like /twirp/MyService/SayHello with a JSON payload and receive a JSON response. Very similar to standard gRPC, except optionally JSON.
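To illustrate, here is a minimal sketch of calling such an endpoint from plain Java with java.net.http.HttpClient. The host, port, service name, and JSON field are hypothetical (and depending on how the service is generated, the path may include the proto package prefix):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TwirpJsonCall {
    public static void main(String[] args) throws Exception {
        // Twirp routes every method to POST /twirp/<Service>/<Method>;
        // with Content-Type: application/json it accepts and returns JSON.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/twirp/MyService/SayHello"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"name\": \"world\"}"))
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // e.g. {"message": "Hello, world"}
    }
}
```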
For gRPC-Gateway it's a little different. There you can configure any HTTP REST endpoint on top of an existing gRPC service. For example, MyService.SayHello can map to GET /hello. This makes it very easy to implement a complete REST service on top of your gRPC definitions.
Hope this clarifies it.
ahc and ahc-ws (Async Http Client) components have been deprecated in Apache Camel version 3.16: https://issues.apache.org/jira/browse/CAMEL-17667.
Is there an alternative for ahc-ws? The component was very easy to use to consume external WebSocket APIs.
Other libraries like Jetty, Undertow, and Atmosphere don't seem to offer this kind of feature. I have not been able to configure them and the documentation remains unclear. They only provide the server part.
For the websocket-jsr356 component, I can't configure the component to consume a WebSocket-over-SSL API (wss). The library seems to support only plain WebSocket (ws).
I looked for alternatives on the camel doc, examples on github but I didn't find anything.
Is there a viable alternative to ahc-ws to consume external WebSocket APIs simply with Camel?
Thanks a lot
Looks like it's not deprecated yet; there is just a proposal for that. ahc-wss is very useful currently and there is no viable alternative for it. The websocket component requires tedious tweaking of secure storage parameters, which just kills the purpose of wss. I hope they don't deprecate ahc-wss without a proper replacement.
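For reference, consuming an external wss endpoint with ahc-wss stays a one-line route. A minimal sketch, assuming a hypothetical external endpoint (the URI and the processing step are illustrative):

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.main.Main;

public class WssConsumerRoute extends RouteBuilder {

    @Override
    public void configure() {
        // ahc-wss handles the TLS handshake for wss:// endpoints out of the box,
        // with no keystore/truststore wiring needed for a public API.
        from("ahc-wss://echo.example.com/stream")
                .log("Received frame: ${body}");
    }

    public static void main(String[] args) throws Exception {
        Main main = new Main();
        main.configure().addRoutesBuilder(new WssConsumerRoute());
        main.run(args);
    }
}
```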
Recently I have been working with the IoT department; right now our project is under discussion and we are creating the core architecture of an application. The client specification is that we must use the MQTT protocol to communicate between the devices and the Java application (Eclipse Paho client).
It's a web application based on Spring Boot and a microservice architecture, but I am not able to find any good API gateway solution that provides MQTT support.
I found that Zuul is good, but do we have any alternative like Kong?
MQTT is a TCP stream based protocol, so API gateways that operate at HTTP / Layer 7 are not going to fit the bill.
There are extensions to commercial API Gateways available, such as the Axway MQTT Proxy described here.
While not an API Gateway, Confluent also have a MQTT proxy that allows simple integration with Kafka, however if you have already written an application that implements the backend then Kafka is going to require some re-architecting.
The other option is really to go with a simple TCP stream reverse proxy like nginx or HAProxy.
If I was asked to build something like this, I'd go straight to Kafka. It and MQTT are a very neat architectural fit and also operate very well together but it really depends on your requirements.
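Whichever proxy option you choose, the Java side stays plain Paho; the gateway or reverse proxy in front of the broker is transparent to the client. A minimal sketch, assuming a hypothetical broker (or TCP proxy in front of it) at mqtt.example.com:1883 and an illustrative topic layout:

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;

public class DeviceTelemetrySubscriber {

    public static void main(String[] args) throws Exception {
        // The client only sees a broker URL; whether nginx/HAProxy or an
        // MQTT-aware proxy sits in front of the real broker is invisible here.
        MqttClient client = new MqttClient("tcp://mqtt.example.com:1883",
                MqttClient.generateClientId());

        MqttConnectOptions options = new MqttConnectOptions();
        options.setAutomaticReconnect(true);
        options.setCleanSession(true);
        client.connect(options);

        // Subscribe to device telemetry and hand each message to the application.
        client.subscribe("devices/+/telemetry", (topic, message) ->
                System.out.println(topic + " -> " + new String(message.getPayload())));
    }
}
```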
I was experimenting with the reactive web framework. I have certain questions regarding how it will work.
In a typical application, we have a datastore (relational or NoSQL).
The application layer (controllers) connects to the store and gets the data.
The client layer calls your API endpoints and gets the data.
To the best of my knowledge, there are no async or reactive drivers published by vendors (only Mongo and maybe Cassandra have reactive drivers).
The controller layer will send back the data using Mono, Flux, or Single.
The client layer will be consuming this data.
Since HTTP is synchronous in nature, how will the client layer or the application benefit from reactive support in Spring?
Question: Let us say I have 10 records in JSON coming from my Flux response. Does it mean my client will get the data as a stream, or will the entire data set be fetched first at the client side and only the process of consuming it be reactive in nature? Currently, we have an InputStream as the response of a service call, which is blocking in nature due to the design of the HTTP protocol.
Question: Does it then make sense to have a reactive architecture for a typical web application, when the very medium over which we are going to get the response is blocking in nature?
Spring Web Reactive makes use of Servlet 3.1 non-blocking I/O and runs on Servlet 3.1 containers. It also runs on non-Servlet runtimes such as Netty and Undertow. Each runtime is adapted to a set of shared, reactive ServerHttpRequest and ServerHttpResponse abstractions that expose the request and response body as Flux with full backpressure support on the read and the write side.
Source:
https://docs.spring.io/spring-framework/docs/5.0.0.M1/spring-framework-reference/html/web-reactive.html
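As a sketch of what that looks like on the server side, here is a hypothetical WebFlux controller (the endpoint, record shape, and delay are illustrative and not tied to any particular datastore). Each element is written to the client as it is produced rather than buffering the whole list, and backpressure is propagated through the Flux:

```java
import java.time.Duration;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;

@RestController
public class RecordController {

    // Streams records one by one as server-sent events instead of
    // materializing the full response in memory first.
    @GetMapping(value = "/records", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<String> records() {
        return Flux.range(1, 10)
                .delayElements(Duration.ofMillis(100))
                .map(i -> "{\"id\": " + i + "}");
    }
}
```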
Datastore vendors and OSS communities are working on that. There's already support for Cassandra, Couchbase, MongoDB and Redis in Spring Data Kay.
I think you're conflating the HTTP protocol itself and blocking Java APIs. You're not getting the full HTTP request or response in one big block, so the HTTP protocol is not synchronous. The underlying networking library you choose also drives the choice between blocking and non-blocking I/O.
Now about your HTTP client question: if you're using WebClient, the returned Flux will emit elements as soon as they're available. The underlying libraries are reading and decoding messages as soon as possible, while still respecting backpressure.
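A minimal consuming sketch with WebClient (hypothetical base URL and endpoint matching the controller above); the point is that elements are handed to the subscriber as they arrive on the wire:

```java
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Flux;

public class RecordClient {

    public static void main(String[] args) {
        WebClient webClient = WebClient.create("http://localhost:8080");

        // bodyToFlux decodes each element as soon as it is read off the wire,
        // so processing starts before the full response has arrived.
        Flux<String> records = webClient.get()
                .uri("/records")
                .retrieve()
                .bodyToFlux(String.class);

        records.doOnNext(record -> System.out.println("Got: " + record))
               .blockLast(); // block only to keep this demo alive
    }
}
```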
I'm not sure I get your last question - but if you're wondering when and why you should use a reactive approach: this approach has benefits if you're already running into scalability/efficiency issues, or if your application is communicating with many external services and is then sensitive to latency. See more about that in the Spring Framework 5.0 FAQ.
All the examples use graphql-relay-js on the server.
Can I implement a GraphQL server without graphql-relay-js and still use Relay on the client?
If one client uses only the GraphQL API and another client uses Relay to get data, how do I handle that?
Relay requires that you follow certain conventions with your GraphQL schema; these conventions are documented in the GraphQL Relay Specification. Note that these conventions are useful whether or not you're using Relay as the client.
graphql-relay-js is a set of helpers to make it easier to implement the above specification. This module is not required in order to use Relay - you're free to implement the above spec manually.
What is the difference between the RMI Service Exporter and the HttpInvoker?
I know that the RMI exporter uses RMI as the underlying communication technology and the HTTP invoker uses a standard HTTP POST. Are there any other differences worth noting?
RMI is a standard Java technology, portable in principle. You can easily interact with other Java applications.
Spring HTTP invoker is a proprietary technology. It, just like RMI, uses Java serialization, but it uses the standard HTTP protocol as the underlying network layer. On one hand this is less portable, as you are limited to other Spring applications. On the other hand, using the standard HTTP protocol might be viewed as more portable compared to the binary RMI protocol.
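For context, the two exporters are configured almost identically; only the transport changes. A minimal sketch, assuming a hypothetical AccountService interface and an implementation bean already in the context (the service name, port, and URL mapping are illustrative):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.remoting.httpinvoker.HttpInvokerServiceExporter;
import org.springframework.remoting.rmi.RmiServiceExporter;

@Configuration
public class RemotingConfig {

    // Exposes the service over the RMI protocol on the given registry port.
    @Bean
    public RmiServiceExporter rmiExporter(AccountService accountService) {
        RmiServiceExporter exporter = new RmiServiceExporter();
        exporter.setServiceName("AccountService");
        exporter.setServiceInterface(AccountService.class);
        exporter.setService(accountService);
        exporter.setRegistryPort(1099);
        return exporter;
    }

    // Exposes the same service as Java serialization over plain HTTP POST;
    // the bean name doubles as the URL when bean-name URL mapping is active.
    @Bean(name = "/remoting/AccountService")
    public HttpInvokerServiceExporter httpInvokerExporter(AccountService accountService) {
        HttpInvokerServiceExporter exporter = new HttpInvokerServiceExporter();
        exporter.setServiceInterface(AccountService.class);
        exporter.setService(accountService);
        return exporter;
    }
}

// Hypothetical service interface shared between client and server.
interface AccountService {
    String balanceFor(String accountId);
}
```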
Choose:
RMI if you need portability across Java applications
HTTP invoker if you need transparent network transport, working nicely with firewalls, etc.
SOAP/REST web services if your API should work across different platforms/clients and it needs to work using standard HTTP protocol
Thrift or protobuf if you need efficient and portable binary protocol