GraphQL + Relay: graphql-relay-js dependency

All the examples use graphql-relay-js on the server.
Can I implement a GraphQL server without graphql-relay-js and still use Relay on the client?
Another situation: if one client uses only the GraphQL API and another client uses Relay to fetch data, how do I handle that?

Relay requires that you follow certain conventions with your GraphQL schema; these conventions are documented in the GraphQL Relay Specification. Note that these conventions are useful whether or not you're using Relay as the client.
graphql-relay-js is a set of helpers to make it easier to implement the above specification. This module is not required in order to use Relay - you're free to implement the above spec manually.
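For readers implementing the spec by hand, here is a minimal, hedged sketch of the two core conventions (global object identification and connections) in plain Node.js, with no graphql-relay-js involved. The function names and cursor format are illustrative choices, not part of any library:

```javascript
// Global object identification: every node gets a globally unique,
// opaque ID. A common convention is base64-encoded "Type:id".
const toGlobalId = (type, id) =>
  Buffer.from(`${type}:${id}`).toString('base64');

const fromGlobalId = (globalId) => {
  const [type, id] = Buffer.from(globalId, 'base64')
    .toString('utf8')
    .split(':');
  return { type, id };
};

// Connection pattern: wrap a list in edges with opaque cursors
// plus a pageInfo object, as the Relay spec describes.
function toConnection(items, { first }) {
  const edges = items.slice(0, first).map((node, i) => ({
    node,
    cursor: Buffer.from(`cursor:${i}`).toString('base64'),
  }));
  return {
    edges,
    pageInfo: {
      hasNextPage: items.length > first,
      hasPreviousPage: false,
      startCursor: edges.length ? edges[0].cursor : null,
      endCursor: edges.length ? edges[edges.length - 1].cursor : null,
    },
  };
}
```

Your resolvers then return these shapes directly; Relay on the client only cares that the schema exposes them, not which library produced them.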

Related

Hybrid application with GraphQL and Nest

Suppose I am building a system in a microservices architecture. Instead of using Rest, I chose GraphQL. Thus, I have several services that have no controllers but resolvers.
Now I would like to call a method from service2 (microservice) in the service1 (microservice) and get the result from the service2.
Normally, I could use a hybrid application in NestJS with @MessagePattern. Nevertheless, when using GraphQL, the reference must be to a query or mutation in service2's resolver.
So, with this arrangement, how do I establish a connection between services 1 and 2 using resolvers?
It sounds like what you want is schema stitching. This will allow you to add remote schemas (essentially methods from other services) to your GraphQL server.
I'm more familiar with C# implementations of GraphQL, e.g. ChilliCream Hot Chocolate, which has info on schema stitching here: https://chillicream.com/docs/hotchocolate/distributed-schema/schema-stitching
Since you mentioned NestJs in your question, I'll link this previous Stack Overflow question: How to get multiple remote schemas stitched with Nestjs and apollo server
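As a rough illustration of what stitching/delegation boils down to, here is a hedged sketch in plain JavaScript of a service1 resolver that forwards a query to service2's GraphQL endpoint over HTTP POST. The helper names (makeRemoteExecutor, makeUserResolver) and the user field are hypothetical, not NestJS or graphql-tools APIs:

```javascript
// The transport (an async POST function) is injected so the shape of
// the delegation is visible without a real HTTP server.
function makeRemoteExecutor(post) {
  return async function executeRemote(query, variables) {
    const body = JSON.stringify({ query, variables });
    const response = await post('/graphql', body); // service2's endpoint
    const { data, errors } = JSON.parse(response);
    if (errors) throw new Error(errors[0].message);
    return data;
  };
}

// service1's resolver for a field whose data really lives in service2.
function makeUserResolver(executeRemote) {
  return async function userById(id) {
    const data = await executeRemote(
      'query ($id: ID!) { user(id: $id) { id name } }',
      { id },
    );
    return data.user;
  };
}
```

Stitching libraries automate exactly this forwarding (plus schema merging and type conflict handling), but the underlying call is still a GraphQL request from one service to another.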

How to use Apollo Server DataSource to call a GraphQL API

In our GraphQL API (Apollo Server) we would like to add a new dataSource which accesses GitHub's GraphQL API. We are looking to consume this data. It appears that using apollo-datasource-rest is a good approach to do this. It's an established, still-maintained module which provides caching, access to context, and other dataSource benefits. It's also managed by the Apollo team. We want to verify that this is a good approach for making requests to other GraphQL APIs.
Other options are:
Roll your own dataSource, which doesn't seem necessary or to have apparent benefits
Build out a dataSource using @apollo/client
There is a module, apollo-datasource-graphql, which appears to fit this perfectly, though it has not been updated in two years and seems unfinished, with tests and request caching incomplete.
Is using apollo-datasource-rest a good practice for accessing other GraphQL APIs as a dataSource in a GraphQL server service?
Is there a better, more established approach for doing this?
We are having the same concern, since our backend needs to consume a GraphQL API as a client. The REST interface approach expects HTTP GET queries to be cacheable, but not verbs like POST, PUT, DELETE... My understanding of GraphQL is that if you are only using HTTP POST as a communication pattern, this is going to prevent apollo-datasource-rest from handling caching for your queries, and then it may not be the appropriate lib.
Other approaches to consider:
apollo-datasource-http
Apollo Server (and the GraphQL specification) also supports GET queries, so it may solve apollo-datasource-rest's caching issues
usage of graphql-code-generator to generate the consumer of the target GraphQL API (and then use the client directly inside a service, or define a custom dataSource to wrap the client)
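To make the "roll your own" option concrete, here is a minimal, hedged sketch of a custom data source class that wraps a remote GraphQL API with POST requests and naive per-instance memoization. The class and method names are illustrative, not part of any Apollo package:

```javascript
// A tiny GraphQL data source: POSTs {query, variables} to a remote API
// and caches results per query+variables key for the instance's lifetime
// (roughly the per-request lifetime of an Apollo data source).
class GraphQLDataSource {
  constructor(post) {
    this.post = post;       // injected transport, e.g. a fetch wrapper
    this.cache = new Map(); // naive per-instance memoization
  }

  async query(query, variables = {}) {
    const key = query + JSON.stringify(variables);
    if (this.cache.has(key)) return this.cache.get(key);
    const raw = await this.post(JSON.stringify({ query, variables }));
    const { data, errors } = JSON.parse(raw);
    if (errors) throw new Error(errors[0].message);
    this.cache.set(key, data);
    return data;
  }
}
```

A real implementation would add headers (auth for GitHub), error typing, and a shared cache backend, but the shape is small enough that rolling your own is not unreasonable when the existing modules don't fit.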

What is the difference between grpc-gateway vs Twirp RPC

We already have Twirp, which provides RPC and REST endpoints. Then why do we need grpc-gateway? What advantages does it provide compared to Twirp? Is it just that we can provide custom endpoints with grpc-gateway, or is that the only difference? What can grpc-gateway do that Twirp can't?
Twirp and gRPC gateway are similar. They both build API services out of a protobuf file definition.
Main differences:
gRPC only uses protobuf over HTTP2, which means browsers can't easily talk directly to gRPC-based services.
Twirp works over Protobuf and JSON, over HTTP 1.1 and HTTP2, so any client can easily communicate.
gRPC is a full framework with many features. Very powerful stuff.
Twirp is tiny and simple. It only has a few basic features, but it is a lot easier to manage.
To generate the RPC scaffolding for Go services, we can consider gRPC from the beginning or look towards the simpler Twitch RPC framework, i.e. Twirp.
Common reasons for choosing Twirp over gRPC are as follows:
Twirp comes with HTTP 1.1 support.
Twirp supports JSON transport out of the box.
gRPC re-implements HTTP/2 outside of net/http.
And reasons for gRPC over Twirp are:
gRPC supports streaming.
gRPC makes wire compatibility promises.
More functionality on the networking level.
Twirp supports JSON-encoded requests and responses in addition to the binary Protobuf-codec while it still behaves as an RPC. You can use HTTP POST on an endpoint like /twirp/MyService/SayHello with a JSON payload and receive a JSON response. Very similar to standard gRPC except optionally JSON.
For gRPC-Gateway it's a little different. Here you can configure any HTTP REST endpoint on an existing gRPC service. For example, MyService.SayHello can map to GET /hello. This makes it very easy to implement a complete REST service on top of your gRPC definitions.
Hope this clarifies it.
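The two calling conventions described above can be sketched in a few lines. The twirpRequest helper mirrors Twirp's documented POST /twirp/&lt;Service&gt;/&lt;Method&gt; JSON routing; gatewayRoute is a purely hypothetical stand-in for the REST mapping a gRPC-Gateway derives from proto annotations:

```javascript
// Twirp convention: every call is an HTTP POST to a fixed path,
// /twirp/<Service>/<Method>, with a JSON (or binary protobuf) body.
function twirpRequest(service, method, payload) {
  return {
    path: `/twirp/${service}/${method}`,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  };
}

// gRPC-Gateway convention: each RPC is mapped onto an arbitrary REST
// route. This lookup table stands in for what the gateway generates
// from google.api.http annotations in the .proto file.
function gatewayRoute(rpcMethod) {
  const routes = { SayHello: { verb: 'GET', path: '/hello' } };
  return routes[rpcMethod];
}
```

The contrast is the point: Twirp fixes the URL scheme so any HTTP client can call it uniformly, while gRPC-Gateway lets you design the REST surface per method.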

Elasticsearch high/low-level REST client vs Spring RestTemplate

I am in a dilemma over whether to use Spring's RestTemplate or Elasticsearch's own high/low-level REST client when searching in ES. Does the ES client provide any advantage, like HTTP connection pooling or performance, compared to Spring's RestTemplate? Which of the two takes less time to get a response from the server? Can someone please explain this?
The biggest advantage of using Spring Data Elasticsearch is that you don't have to bother about the things like converting your requests/request bodies/responses from your POJO domain classes to and from the JSON needed by Elasticsearch. You just use the methods defined in the ElasticsearchOperations class which is implemented by the *Template classes.
Or, going one abstraction layer up, use the Repository interfaces that all the Spring Data modules provide to store and search/retrieve your data.
Firstly, this is a very broad question. Not sure if it suits the SO guidelines.
But my two cents:
The high-level client uses the low-level client, which does provide connection pooling.
The high-level client manages the marshalling and unmarshalling of the Elasticsearch query body and response, so it might be easier to work with its APIs.
On the other hand, if you are familiar with Elasticsearch querying by providing the JSON body (i.e. when you are using the Kibana console or other REST API tools), then you might find it a bit difficult to translate between the JSON body and the Java classes used for creating the query.
I generally overcome this by logging the query generated by the Java API so that I can use it with the Kibana console or other REST API tools.
Regarding which one is more efficient: the choice of library will not matter that much to the response times.
If you want to use Spring Reactive features and make use of WebClient, ES Libraries do provide support for Async search.
Update:
Please check the answer by Wim Van den Brande below. He has mentioned a very valid point: the Transport Client being used has been deprecated in favor of the REST API.
So it would be interesting to see how RestTemplate or Spring Data Elasticsearch will update their API to replace the TransportClient.
One important remark and caveat with regards to the usage of Spring Data Elasticsearch: currently, Spring Data Elasticsearch doesn't support communication via the High Level REST Client API. It uses the TransportClient. Please note, the TransportClient is deprecated as of Elasticsearch 7.0.0 and is expected to be removed in Elasticsearch 8.0!
FYI, this statement has been confirmed already by another post: Elasticsearch Rest Client with Spring Data Elasticsearch

ElasticSearch HTTP client vs Transport client

What is the best practice using ElasticSearch from Java?
For instance one can easily find documentation and examples for delete-by-query functionality using REST API.
This is not the case for Java Transport Client.
Where can I find usage examples for Java Transport Client ?
Does Java Transport Client cover whole ElasticSearch functionality like HTTP client via REST API?
The best practice of using Elasticsearch from Java: Follow This
Next:
You can follow the library : JEST
Yes, for the most part the Java Transport Client covers the whole Elasticsearch functionality, like the HTTP client via the REST API.
To complete @sunkuet02's answer:
As mentioned in the documentation, the TransportClient is the preferred way if you're using Java (performance, serialization, ...).
But the jar is quite heavy (with dependencies) and it requires that you use the same versions between client and server to work.
If you want a very lightweight client, there is a new Java REST client in Elastic 5.x.
I don't know the differences with Jest, but it's compatible with the latest Elastic 5.x versions (but not with previous versions) and it's developed by the Elastic team.
So it might be a good option to consider, according to your needs.
