I am working on GraphQL with Java. My concern is: does GraphQL support Oracle as a database engine, or does it only support document-based databases like Mongo and graph databases?
does it only support document-based databases like Mongo and graph databases?
No. GraphQL is not another NoSQL. It has nothing to do with what database you use. You can use whatever you like, or even call another web service to store/get data.
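To illustrate, here is a minimal sketch (assuming graphql-java and plain JDBC; the Oracle URL, credentials, and table are made up for illustration) of a data fetcher that serves a GraphQL field straight from an Oracle table:

```java
import graphql.schema.DataFetcher;
import graphql.schema.DataFetchingEnvironment;
import java.sql.*;
import java.util.*;

// Sketch only: GraphQL defines the API shape; the data fetcher is free to read
// from Oracle (or anything else) via plain JDBC.
public class BooksDataFetcher implements DataFetcher<List<Map<String, Object>>> {

    // Hypothetical connection details
    private static final String JDBC_URL = "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1";

    @Override
    public List<Map<String, Object>> get(DataFetchingEnvironment env) throws Exception {
        try (Connection con = DriverManager.getConnection(JDBC_URL, "app_user", "secret");
             PreparedStatement ps = con.prepareStatement("SELECT id, title FROM books");
             ResultSet rs = ps.executeQuery()) {
            List<Map<String, Object>> books = new ArrayList<>();
            while (rs.next()) {
                Map<String, Object> book = new HashMap<>();
                book.put("id", rs.getLong("id"));
                book.put("title", rs.getString("title"));
                books.add(book);
            }
            return books;
        }
    }
}
```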
Related
Suppose I am building a system in a microservices architecture. Instead of using REST, I chose GraphQL. Thus, I have several services that have no controllers, but resolvers.
Now I would like to call a method from service2 (one microservice) in service1 (another microservice) and get the result back from service2.
Normally, I could use a hybrid application in NestJS with @MessagePattern. Nevertheless, when using GraphQL, the reference must be to a query or mutation in service2's resolver.
So, with this arrangement, how do I establish a connection between services 1 and 2 using resolvers?
It sounds like what you want is schema stitching. This will allow you to add remote schemas (essentially methods from other services) to your GraphQL server.
I'm more familiar with C# implementations of GraphQL, e.g. ChilliCream Hot Chocolate, which has info on schema stitching here: https://chillicream.com/docs/hotchocolate/distributed-schema/schema-stitching
Since you mentioned NestJS in your question, I'll also link this previous Stack Overflow question: How to get multiple remote schemas stitched with Nestjs and apollo server
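If full stitching is more than you need, a resolver in service 1 can also simply forward a query to service 2's GraphQL endpoint over HTTP. Below is a minimal Java sketch of that simpler alternative; the service URL, query, and field names are made-up assumptions:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical helper used by a resolver in service1 to call a query exposed
// by service2's GraphQL schema.
public class Service2Client {

    private static final String SERVICE2_GRAPHQL_URL = "http://service2:3000/graphql"; // assumption

    private final HttpClient http = HttpClient.newHttpClient();

    public String fetchUserById(String id) throws Exception {
        // A GraphQL request is just a JSON document with "query" and "variables".
        String body = "{\"query\":\"query($id: ID!) { user(id: $id) { id name } }\","
                + "\"variables\":{\"id\":\"" + id + "\"}}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(SERVICE2_GRAPHQL_URL))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body(); // raw JSON; a real resolver would map this to domain objects
    }
}
```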
I have a REST API through which I am sending and getting messages to/from a Kafka server using Spring Boot. Now I want to save those messages to Elasticsearch. How do I do that? Can anyone help?
This is really a systematic job; in a way it is like setting up a database storage architecture.
To keep it simple and short:
First you need to decide which ES version you want to use, because there are breaking changes between ES 2.x and 7.x, and those differences may affect the way you design the schema of your storage.
Assuming you use the latest 7.x ES, you will need to create the index(es) where you want the data fetched from Kafka to be stored. Check out https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html
Once you have the indexes created, you need to learn some basics about the ES high level and low level REST clients. The low level REST client gives you the basic connection to the ES cluster via HTTP, while the high level REST client APIs give you convenient ways to do operations like document CRUD, search, and aggregations on your data. You can easily find the dependencies via Maven and use them in your Spring Boot application. Check out https://www.elastic.co/guide/en/elasticsearch/client/java-rest/master/java-rest-high.html
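As a rough sketch of how the pieces could fit together, the Spring Kafka listener below indexes each consumed message into Elasticsearch via the high level REST client; the topic, index name, group id, and host are made-up assumptions:

```java
import org.apache.http.HttpHost;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.xcontent.XContentType;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Service;

// Hypothetical bridge: consume messages from a Kafka topic and index them into ES.
@Service
public class MessageIndexer {

    private final RestHighLevelClient client = new RestHighLevelClient(
            RestClient.builder(new HttpHost("localhost", 9200, "http")));

    @KafkaListener(topics = "messages", groupId = "es-indexer")
    public void onMessage(String json) throws Exception {
        // Each Kafka record is assumed to already be a JSON document.
        IndexRequest request = new IndexRequest("messages").source(json, XContentType.JSON);
        client.index(request, RequestOptions.DEFAULT);
    }
}
```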
I am in a dilemma over whether to use Spring's RestTemplate or Elasticsearch's own high/low level REST client when searching in ES. Does the ES client provide any advantage, like HTTP connection pooling or performance, compared to the Spring RestTemplate? Which of the two takes less time to get a response from the server? Can someone please explain this?
The biggest advantage of using Spring Data Elasticsearch is that you don't have to bother with things like converting your requests/request bodies/responses from your POJO domain classes to and from the JSON needed by Elasticsearch. You just use the methods defined in the ElasticsearchOperations interface, which is implemented by the *Template classes.
Or, going one abstraction layer up, use the Repository interfaces that all the Spring Data modules provide to store and search/retrieve your data.
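For example, a minimal repository sketch might look like this (the index name, fields, and derived query method are assumptions for illustration):

```java
import java.util.List;
import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;

// Hypothetical domain class: Spring Data Elasticsearch handles the POJO <-> JSON mapping.
@Document(indexName = "messages") // index name is an assumption
class Message {
    @Id
    private String id;
    private String author;
    private String text;
    // getters and setters omitted for brevity
}

// The query is derived from the method name; no JSON request body is written by hand.
interface MessageRepository extends ElasticsearchRepository<Message, String> {
    List<Message> findByAuthor(String author);
}
```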
Firstly, this is a very broad question, and I'm not sure if it suits the SO guidelines.
But my two cents:
The high level client uses the low level client, which does provide connection pooling.
The high level client manages the marshalling and unmarshalling of the Elasticsearch query body and response, so it might be easier to work with its APIs.
On the other hand, if you are used to querying Elasticsearch by providing the JSON body directly (e.g. when you are using the Kibana console or other REST API tools), you might find it a bit difficult to translate between the JSON body and the Java classes used for creating the query.
I generally overcome this by logging the query generated by the Java API so that I can use it with the Kibana console or other REST API tools (a sketch of this follows below).
Regarding which one is more efficient: the choice of library will not matter that much to the response times.
If you want to use Spring Reactive features and make use of WebClient, the ES libraries do provide support for async search.
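A minimal sketch of that logging trick, assuming the high level client's SearchSourceBuilder (the field name and value are made up):

```java
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;

// SearchSourceBuilder#toString() renders the request as JSON, which can be
// pasted straight into the Kibana console for debugging.
public class QueryLoggingExample {
    public static void main(String[] args) {
        SearchSourceBuilder source = new SearchSourceBuilder()
                .query(QueryBuilders.matchQuery("author", "alice")) // hypothetical field/value
                .size(10);
        System.out.println(source); // prints the JSON body of the search request
    }
}
```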
Update:
Please check the answer by Wim Van den Brande below. He makes a very valid point about the Transport Client, which has been deprecated in favour of the REST API.
So it will be interesting to see how RestTemplate or Spring Data Elasticsearch update their APIs to replace the TransportClient.
One important remark and caveat regarding the usage of Spring Data Elasticsearch: currently, Spring Data Elasticsearch does not support communication via the High Level REST Client API; it still uses the TransportClient. Please note that the TransportClient is deprecated as of Elasticsearch 7.0.0 and is expected to be removed in Elasticsearch 8.0!
FYI, this statement has already been confirmed by another post: Elasticsearch Rest Client with Spring Data Elasticsearch
Airpal currently uses the Presto client to connect to PrestoDB. However, as I understand it, it can also use JDBC for this connectivity. Is there any code available for this purpose? Even code for connecting to some other database might be helpful for me. The model for the Presto client looks a lot different from other models like JDBC.
Airpal uses presto-client connectivity and also uses its objects (mostly for schema and data, like Column, QueryResults, etc.) internally in its various modules.
One way of providing JDBC connectivity is to move its lowest layer of DB connectivity (the executeWith invocations of com.airbnb.airpal.core.execution.QueryClient: there is 1 for data and about 6 for metadata) to JDBC query execution. The JDBC results (mostly data and schema) can then be converted to presto-client API equivalent objects, and the rest of the logic in Airpal would follow.
Another approach is to rewrite Airpal with native JDBC support, moving over to JDBC objects for internal use and communication as well. That looks like a much bigger change.
I am planning to add support for dynamically choosing between presto-client and JDBC connectivity. I will use com.airbnb.airpal.presto.QueryRunner to hold either a presto-client session or a JDBC connection accordingly.
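As a rough, generic sketch of the first approach (not Airpal's actual code), the snippet below runs a query over the Presto JDBC driver and collects the schema and row data that would then have to be mapped onto the presto-client Column/QueryResults-style objects; the URL, user, and query are assumptions:

```java
import java.sql.*;
import java.util.*;

// Generic JDBC execution sketch against Presto; the results could feed Airpal's
// internal schema/data objects.
public class PrestoJdbcQuery {

    public static void main(String[] args) throws Exception {
        String url = "jdbc:presto://presto-coordinator:8080/hive/default"; // assumption
        try (Connection con = DriverManager.getConnection(url, "airpal", null);
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM nation LIMIT 10")) {

            // Column metadata (the "schema" part)
            ResultSetMetaData meta = rs.getMetaData();
            List<String> columns = new ArrayList<>();
            for (int i = 1; i <= meta.getColumnCount(); i++) {
                columns.add(meta.getColumnName(i));
            }

            // Row data (the "data" part)
            List<List<Object>> rows = new ArrayList<>();
            while (rs.next()) {
                List<Object> row = new ArrayList<>();
                for (int i = 1; i <= meta.getColumnCount(); i++) {
                    row.add(rs.getObject(i));
                }
                rows.add(row);
            }
            System.out.println(columns);
            System.out.println(rows.size() + " rows fetched");
        }
    }
}
```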
Can someone help me understand how to expose the SYS schema objects of a JBoss Teiid Virtual Database when connecting via an ODBC-JDBC bridge?
The client is connecting to the ODBC side of the bridge, and the JDBC side of it is connecting to the Virtual Database (VDB) running on the JBoss SOA server.
With the current setup, only the tables and columns modeled through JBoss Studio's Teiid Designer are exposed, but not the SYS schema and its underlying objects. The client app is a MicroStrategy BI application.
You can traverse all the metadata from all of the data sources in use through the native JDBC Java API.
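As a minimal sketch of that idea, the snippet below connects to a VDB with the Teiid JDBC driver and lists its tables and views via DatabaseMetaData; the VDB name, host, port, and credentials are assumptions for illustration:

```java
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;

// Hypothetical metadata browser for a Teiid VDB over plain JDBC.
public class VdbMetadataBrowser {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:teiid:MyVDB@mm://soa-server:31000"; // assumption
        try (Connection con = DriverManager.getConnection(url, "user", "password")) {
            DatabaseMetaData meta = con.getMetaData();
            try (ResultSet tables = meta.getTables(null, null, "%", new String[] {"TABLE", "VIEW"})) {
                while (tables.next()) {
                    System.out.println(tables.getString("TABLE_SCHEM") + "." + tables.getString("TABLE_NAME"));
                }
            }
        }
    }
}
```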
I am new to Teiid and had a similar question.
When you create the VDB with the JBoss designer, you can specify which models will be exposed to client applications. As good practice, only view models are exposed and source models are not. As a result, querying the system tables of the VDB will only show you the metadata within the view models, which will be a subset of the metadata in the underlying data sources.
Hope this helps.