Suppose I am building a system with a microservices architecture. Instead of using REST, I chose GraphQL, so I have several services that have no controllers, only resolvers.
Now I would like to call a method from service2 (one microservice) inside service1 (another microservice) and get the result back from service2.
Normally, I could use a hybrid application in NestJS with @MessagePattern. However, when using GraphQL, the reference must be to a query or mutation in service2's resolver.
So, with this arrangement, how do I establish a connection between services 1 and 2 using resolvers?
It sounds like what you want is schema stitching. This will allow you to add remote schemas (essentially methods from other services) to your GraphQL server.
I'm more familiar with C# implementations of GraphQL, e.g. ChilliCream's Hot Chocolate, which documents schema stitching here: https://chillicream.com/docs/hotchocolate/distributed-schema/schema-stitching
Since you mentioned NestJS in your question, I'll also link this previous Stack Overflow question: How to get multiple remote schemas stitched with Nestjs and apollo server
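To make the idea concrete, here is a minimal sketch of stitching two remote services into one gateway schema with the @graphql-tools packages (framework-agnostic, not the NestJS-specific wiring); the service URLs and the use of cross-fetch are assumptions for illustration:

```typescript
import { fetch } from 'cross-fetch';
import { print, DocumentNode } from 'graphql';
import { introspectSchema } from '@graphql-tools/wrap';
import { stitchSchemas } from '@graphql-tools/stitch';

// An executor forwards a GraphQL operation to a remote service over HTTP.
function makeRemoteExecutor(url: string) {
  return async ({ document, variables }: { document: DocumentNode; variables?: Record<string, unknown> }) => {
    const response = await fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ query: print(document), variables }),
    });
    return response.json();
  };
}

export async function buildGatewaySchema() {
  // Placeholder endpoints for the two microservices.
  const service1Executor = makeRemoteExecutor('http://service1/graphql');
  const service2Executor = makeRemoteExecutor('http://service2/graphql');

  // Introspect each remote schema, then stitch them into one schema whose
  // fields delegate back to the service that owns them.
  return stitchSchemas({
    subschemas: [
      { schema: await introspectSchema(service1Executor), executor: service1Executor },
      { schema: await introspectSchema(service2Executor), executor: service2Executor },
    ],
  });
}
```

The stitched schema can then be handed to whichever GraphQL server hosts the gateway, and service1 can query the fields that service2 contributes as if they were local.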
I would like to get some clarity on the terminology of microservices.
Please refer to the diagram below; the whole of it represents the microservice architecture.
Microservice - does it refer only to the services that are exposed as an API to a channel [be it browser / native app / host], or also to the services that are not exposed [the underlying ones in the diagram]:
Generic
Orchestrated
Atomic
As per the diagram, links from orchestrated to atomic services are shown.
Do these always have to be REST/HTTP calls, or can they be normal Java library method calls packaged in the same runnable package?
All the tutorials say 1 microservice = 1 REST-based service, or anything exposed as a controller that can be called from outside.
Can we also call a library or a DAO/generic service a microservice?
[Diagrams: Microservice Architecture Viewpoint, Microservice Viewpoint 2, Comparison]
Does it refer only to the services that are exposed as an API to a channel, or also to the services that are not exposed?
A microservice is a service that serves a business need - microservices are "componentization via services", components of a bigger system - so they don't necessarily need to be exposed to the external world, but they can be.
Does it always have to be a REST/HTTP call, or can it be a normal Java library method call packaged in the same runnable package?
Microservices communicate over the network, but it does not have to be HTTP/REST; it can also be a Kafka topic, gRPC, or something else. The important part is that they must be independently deployable, e.g. you can upgrade a single microservice without needing to change another service at the same time.
See Martin Fowler - Microservices - 9 characteristics for the most commonly accepted definition.
In our GraphQL API (Apollo Server) we would like to add a new dataSource which accesses GitHub's GraphQL API; we are looking to consume this data. It appears that using apollo-datasource-rest is a good approach to do this. It's an established, still-maintained module which provides caching, access to context, and other dataSource benefits, and it's also managed by the Apollo team. We want to verify that this is a good approach for making requests to other GraphQL APIs (a rough sketch of what we have in mind appears after the questions below).
Other options are:
Roll our own data source, which doesn't seem necessary and has no apparent benefit
Build out a data source using @apollo/client
There is a module, apollo-datasource-graphql, which appears to fit this perfectly, though it has not been updated in two years and appears to be unfinished, with tests and request caching incomplete.
Is using apollo-datasource-rest a good practice for accessing other GraphQL APIs as a data source in a GraphQL server?
Is there a better, more established approach for doing this?
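For context, a minimal sketch of what the apollo-datasource-rest approach might look like, sending the GraphQL query as an HTTP POST; the class name, the context field holding the token, and the example query are placeholders:

```typescript
import { RESTDataSource, RequestOptions } from 'apollo-datasource-rest';

// Hypothetical data source that talks to GitHub's GraphQL endpoint
// through apollo-datasource-rest.
export class GitHubGraphQLDataSource extends RESTDataSource {
  constructor() {
    super();
    this.baseURL = 'https://api.github.com/';
  }

  // Attach the token from the request context (field name is an assumption).
  willSendRequest(request: RequestOptions) {
    request.headers.set('Authorization', `Bearer ${this.context.githubToken}`);
  }

  // GraphQL queries go out as POST bodies to the /graphql path.
  async getViewerLogin(): Promise<string> {
    const body = await this.post('graphql', {
      query: 'query { viewer { login } }',
    });
    return body.data.viewer.login;
  }
}
```

Note that because these calls go out as POSTs, RESTDataSource's HTTP caching will not apply to them, which is exactly the concern raised in the answer below.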
We have the same concern, since our backend needs to consume a GraphQL API as a client. The REST interface approach expects HTTP GET requests to be cacheable, but not verbs like POST, PUT, DELETE... My understanding of GraphQL is that if you are only using HTTP POST as a communication pattern, this is going to prevent apollo-datasource-rest from handling caching for your queries, and then it may not be the appropriate lib.
Other approaches to consider:
apollo-datasource-http
Apollo Server (and the GraphQL specification) also supports GET queries, so that may solve the apollo-datasource-rest caching issue
use graphql-code-generator to generate the consumer of the target GraphQL API (and then use the client directly inside a service, or define a custom data source to wrap the client, as sketched below)
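A rough sketch of that last option, assuming the wrapped client is graphql-request (for which graphql-code-generator can also generate typed operation wrappers); the endpoint, the context field holding the token, and the example query are placeholders:

```typescript
import { DataSource, DataSourceConfig } from 'apollo-datasource';
import { GraphQLClient } from 'graphql-request';

interface AppContext {
  githubToken: string; // assumed to be placed on the context by the server
}

export class GitHubDataSource extends DataSource<AppContext> {
  private client!: GraphQLClient;

  // Apollo calls initialize() with the per-request context, so the client
  // can carry per-request credentials.
  initialize(config: DataSourceConfig<AppContext>) {
    this.client = new GraphQLClient('https://api.github.com/graphql', {
      headers: { Authorization: `Bearer ${config.context.githubToken}` },
    });
  }

  // Hand-written wrapper around one operation; graphql-code-generator can
  // generate these methods (and their types) instead of writing them by hand.
  async getViewerLogin(): Promise<string> {
    const data = await this.client.request<{ viewer: { login: string } }>(
      'query { viewer { login } }',
    );
    return data.viewer.login;
  }
}
```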
I am in a dilemma over whether to use Spring's RestTemplate or Elasticsearch's own high/low-level REST client when searching in ES. Does the ES client provide any advantage, such as HTTP connection pooling or performance, compared to Spring's RestTemplate? Which of the two takes less time to get a response from the server? Can someone please explain this?
The biggest advantage of using Spring Data Elasticsearch is that you don't have to bother with things like converting your requests/request bodies/responses from your POJO domain classes to and from the JSON needed by Elasticsearch. You just use the methods defined in the ElasticsearchOperations interface, which is implemented by the *Template classes.
Or, going one abstraction layer up, use the Repository interfaces that all the Spring Data modules provide to store and search/retrieve your data.
Firstly, this is a very broad question; I'm not sure it suits the SO guidelines.
But my two cents:
The high-level client uses the low-level client, which does provide connection pooling.
The high-level client manages the marshalling and unmarshalling of the Elasticsearch query body and response, so it might be easier to work with its APIs.
On the other hand, if you are used to querying Elasticsearch by providing the JSON body directly (i.e. when you are using the Kibana console or other REST API tools), you might find it a bit difficult to translate between the JSON body and the Java classes used to build the query.
I generally overcome this by logging the query generated by the Java API so that I can use it with the Kibana console or other REST API tools.
Regarding which one is more efficient: the choice of library will not matter enough to noticeably affect response times.
If you want to use Spring Reactive features and make use of WebClient, the ES libraries do provide support for async search.
Update:
Please check the answer by Wim Van den Brande below. He makes a very valid point: Spring Data Elasticsearch currently uses the TransportClient, which has been deprecated in favour of the REST API.
So it would be interesting to see how RestTemplate or Spring Data Elasticsearch will update their APIs to replace the TransportClient.
One important remark and caveat regarding the use of Spring Data Elasticsearch: currently, Spring Data Elasticsearch doesn't support communication via the high-level REST client API. It uses the TransportClient. Please note that the TransportClient is deprecated as of Elasticsearch 7.0.0 and is expected to be removed in Elasticsearch 8.0!
FYI, this statement has already been confirmed by another post: Elasticsearch Rest Client with Spring Data Elasticsearch
I am working on GraphQL with Java. My concern is: does GraphQL support Oracle as a database engine, or does it only support document-based databases like Mongo and graph databases?
does it only support document-based databases like Mongo and graph databases?
No. GraphQL is not another NoSQL database; it has nothing to do with what database you use. You can use whatever you like, or even call another web service to store/get data, as the sketch below illustrates.
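For example, here is a minimal sketch of a resolver backed by a plain SQL table (shown in TypeScript/Apollo Server for brevity, but the same idea applies with graphql-java); the table, the query, and the stub db object are illustrative stand-ins for a real Oracle client:

```typescript
import { ApolloServer, gql } from 'apollo-server';

const typeDefs = gql`
  type Customer {
    id: ID!
    name: String!
  }
  type Query {
    customer(id: ID!): Customer
  }
`;

// Stand-in for a real Oracle client (node-oracledb here, or JDBC in Java);
// it returns a canned row so the sketch runs without a database.
const db = {
  async execute(_sql: string, binds: string[]) {
    return [{ id: binds[0], name: 'Ada' }];
  },
};

const resolvers = {
  Query: {
    // The resolver decides where the data comes from: here it is a SQL query
    // against a hypothetical CUSTOMERS table, and GraphQL is indifferent to that.
    customer: async (_parent: unknown, args: { id: string }) => {
      const rows = await db.execute(
        'SELECT id, name FROM customers WHERE id = :id',
        [args.id],
      );
      return rows[0] ?? null;
    },
  },
};

new ApolloServer({ typeDefs, resolvers }).listen();
```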
When using an instance of the DataServiceContext class to materialise objects from an OData endpoint that exposes some custom annotations, how does one get hold of the annotation data? I can't see any obvious extensibility points.
Custom annotations aren't exposed as a first-class concept on the DataServiceContext, but you can access them by hooking into the client response processing pipeline. This code will run after every entity is finished being read:
context.Configurations.ResponsePipeline.OnEntryEnded(
entryArgs => DoSomething(entryArgs.Entry.InstanceAnnotations));
Internally, the WCF Data Services Client uses a lower-level library called ODataLib (aka Microsoft.Data.OData on NuGet). The response and request pipelines allow you to dip into that lower level to get extra information when you need it, but you still get all the conveniences of using the full-fledged WCF Data Services client library. The classes like ODataEntry, ODataFeed, etc. that you work with on the processing pipelines are all part of the ODataLib API.