Is it possible to create a ReQL query in the front-end then serialize it to JSON, post it to the server where it is unmarshalled and executed?
If you want to write ReQL queries in the front end, you can use rethinkdb-websocket-client along with rethinkdb-websocket-server to do this.
Keep in mind that this means anyone will be able to run queries against your database from the browser, which is a huge security risk unless you restrict which queries the server will accept.
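Here is a minimal sketch of how the two packages fit together. The option and function names (wsListen, unsafelyAllowAnyQuery, RethinkdbWebsocketClient.connect) follow the packages' READMEs as I recall them, so verify them against the versions you install. The client builds the query with the normal ReQL API, serializes it to the JSON wire protocol, and sends it over a WebSocket; the server validates it and forwards it to RethinkDB.

```js
// server.js: proxies browser WebSocket connections to RethinkDB.
const http = require('http');
const wsListen = require('rethinkdb-websocket-server').listen;

const httpServer = http.createServer();
wsListen({
  httpServer,
  httpPath: '/db',        // endpoint the browser connects to
  dbHost: 'localhost',    // the real RethinkDB instance
  dbPort: 28015,
  // DANGEROUS: executes whatever query the browser sends. For production,
  // replace this with the package's query whitelisting options.
  unsafelyAllowAnyQuery: true,
});
httpServer.listen(8000);
```

```js
// Browser side, using rethinkdb-websocket-client. The ReQL query is built
// as usual; the client serializes it and ships it over the WebSocket.
import RethinkdbWebsocketClient from 'rethinkdb-websocket-client';
const r = RethinkdbWebsocketClient.rethinkdb;

RethinkdbWebsocketClient.connect({
  host: 'localhost', port: 8000, path: '/db', db: 'test',
})
  .then(conn => r.table('turtles').limit(10).run(conn))
  .then(cursor => cursor.toArray())
  .then(rows => console.log(rows));
```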
I need to develop my backend application with Node.js, Express, and GraphQL, and I am using the Apollo GraphQL server for this. Now I have to connect GraphQL to Elasticsearch so that I can write Elasticsearch queries directly in Apollo Playground, the same way I was previously writing GraphQL queries.
Can someone help me with this scenario?
There are multiple ways to structure this. Some projects keep the Elasticsearch logic in a dedicated file, while others put it directly in the GraphQL resolvers; either way, you call the main setup method in the GraphQL/Node.js server entry point (often called index.ts) so that initialization (index creation, etc.) runs at startup.
Use dedicated objects and keep to single responsibility.
Create a front-end observable that watches an API; that API can then take data from the Elasticsearch cluster.
The problem, as you pointed out, is that you are using GraphQL directly, while GraphQL is mainly there to create a layer between the front end and the back end. What you are doing makes the API layer connect directly to the backend, and that needs to change: introduce a new object that exists only for the API, so that no matter what happens to your backend, the API stays the same. That is why GraphQL is important, and it needs to be used in that specific way.
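To make that concrete, here is a hedged sketch of that structure, assuming the @elastic/elasticsearch v8 client and the apollo-server package; the products index and its name field are made-up examples. The Elasticsearch logic lives in its own module, and the resolvers only call that module, so the GraphQL layer never needs to know what the backend is.

```js
// elasticsearch.js: every Elasticsearch call lives in this one module
// (single responsibility). Adapt index and field names to your mappings.
const { Client } = require('@elastic/elasticsearch');
const client = new Client({ node: 'http://localhost:9200' });

async function searchProducts(text) {
  // v8 client: the response body is returned directly
  const result = await client.search({
    index: 'products',
    query: { match: { name: text } },
  });
  return result.hits.hits.map(hit => hit._source);
}

module.exports = { searchProducts };
```

```js
// server.js: the resolvers call the module above, so the GraphQL layer
// stays the same even if the search backend changes.
const { ApolloServer, gql } = require('apollo-server');
const { searchProducts } = require('./elasticsearch');

const typeDefs = gql`
  type Product {
    name: String
  }
  type Query {
    searchProducts(text: String!): [Product]
  }
`;

const resolvers = {
  Query: {
    searchProducts: (_, { text }) => searchProducts(text),
  },
};

new ApolloServer({ typeDefs, resolvers })
  .listen()
  .then(({ url }) => console.log(`GraphQL ready at ${url}`));
```

With this in place you can run the Elasticsearch-backed queries from Apollo Playground exactly like any other GraphQL query.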
After learning about GraphQL and using it in a few projects, I finally wanted to give Prisma a go. It promises to eliminate the need for a database and it generates a GraphQL client and a working database from the GraphQL Schema. So far so good.
But my question is: A GraphQL client to me really only seems useful for a client (prevent overfetching, speed up pages, React integrations, ...). Prisma however does not eliminate the need for business logic, and so one would end up using the generated client library in Node.js, just to re-expose a lot of the functionality in yet another GraphQL server to the actual client.
Why should I prefer Prisma over a custom database solution? Is there a thought behind having to re-expose a lot of endpoints to the actual client?
I work at Prisma and would love to clarify this!
Here's a quick note upfront: Prisma is not a "GraphQL-as-a-Service" tool (in the way that Graphcool, AppSync or Hasura are). The Prisma client is not a "GraphQL client", it's a database client (similar to an ORM). So, the reason for not using the Prisma client on the frontend is the same as for why you wouldn't use an ORM or connect to the DB directly from the frontend.
It promises to eliminate the need for a database and it generates a GraphQL client and a working database from the GraphQL Schema. So far so good.
I'm really curious to hear where exactly you got this perception from! We're well aware that we need to improve our communication about the value that Prisma provides and how it works. What you've formulated there is an extremely common misconception about Prisma that we want to prevent in the future. We're actually planning to publish a blog post about this exact topic next week, hopefully that will clarify a lot.
To pick up the concrete points:
Prisma doesn't eliminate the need for a database. Similar to an ORM, the Prisma client is used to simplify database access (see the sketch after these points). It also makes database migrations easier with a declarative data-modelling and migrations approach (we're actually currently working on large improvements to our migration system; you can find the RFC for it here).
Another major benefit of Prisma is the upcoming Prisma Admin, a data management tool. The first preview for that will be available next week.
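For illustration, here is a minimal sketch of using the Prisma client as a plain database client, written against today's Prisma Client API. The User model and its fields are hypothetical, and the answer above predates parts of this API, so treat the exact calls as an assumption.

```js
// Using the generated Prisma client like an ORM: plain reads and writes,
// no GraphQL involved. The User model here is a hypothetical example.
const { PrismaClient } = require('@prisma/client');
const prisma = new PrismaClient();

async function main() {
  // Create a row, then query it with a typed filter, exactly as an ORM would.
  await prisma.user.create({
    data: { name: 'Alice', email: 'alice@example.com' },
  });
  const users = await prisma.user.findMany({
    where: { email: { endsWith: '@example.com' } },
  });
  console.log(users);
}

main().finally(() => prisma.$disconnect());
```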
I had similar questions too when I started learning GraphQL. This is what I learned and realised after using it.
Prisma acts as a proxy for your database, providing you with a ready-to-use GraphQL API that lets you filter and sort data, along with some custom types like DateTime which are not part of GraphQL and which you'd otherwise have to implement yourself. It's not a GraphQL server, just a layer between your database and your backend server, like an ORM.
It covers almost all the use cases you might have for a data model, with all the CRUD operations pre-defined in a schema along with subscriptions, so you don't have to do all that work and can focus more on the business-logic side of things.
It also removes the need to write different queries for different databases like SQL or MongoDB, acting as a layer that transforms its query language into actual database queries.
Finally, you can use the API (GraphQL) server to expose only the desired schema to the client rather than everything. Since GraphQL queries can get highly nested, implementing that yourself can be difficult and tricky and may also lead to performance issues; with Prisma that is not the case, as it handles all of this itself. A sketch of such a thin API layer follows below.
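Here is a hedged sketch of that last point, assuming apollo-server, @prisma/client, and a hypothetical Post model: the API layer exposes only a small, explicit schema, while the resolvers delegate the actual data access to Prisma.

```js
// The API layer exposes only an explicit, minimal schema; Prisma does the
// database work behind it. apollo-server and the Post model are
// assumptions for the sake of the sketch.
const { ApolloServer, gql } = require('apollo-server');
const { PrismaClient } = require('@prisma/client');
const prisma = new PrismaClient();

const typeDefs = gql`
  # Clients see only these fields, however rich the database model is.
  type Post {
    id: ID!
    title: String!
  }
  type Query {
    publishedPosts: [Post!]!
  }
`;

const resolvers = {
  Query: {
    // The resolver decides what is exposed; Prisma handles the query itself.
    publishedPosts: () => prisma.post.findMany({ where: { published: true } }),
  },
};

new ApolloServer({ typeDefs, resolvers })
  .listen(4001)
  .then(({ url }) => console.log(`API layer ready at ${url}`));
```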
You can check out this article for more info.
I started to use GraphQL with React Relay. I followed some tutorials and I am able to get and post data with the help of mutations and queries. Everything works fine, but my question is:
Where is GraphQL saving the data, and where is it fetching it from for us?
For example, if I get data from a database, I can go and look at the particular DB/table. Likewise, I want to know where GraphQL is storing the data.
I searched many sites; they all explain how to use GraphQL, but I can't find an answer to my question. I need clarification in this area. Can someone help me out with this?
GraphQL is a query language for your API, and a server-side runtime for executing queries by using a type system you define for your data. GraphQL isn't tied to any specific database or storage engine and is instead backed by your existing code and data.
You can connect any database to a GraphQL server.
As I understand it, you are trying out mutations and queries on some hosted engine.
Please go through this reference and set up the GraphQL engine on your own side.
GraphQL, unlike a database-level query language such as SQL, is an application-level query language. It's up to the programmer to create the necessary logic (in most server implementations, realized by resolver functions) to make the domain described by a GraphQL schema a reality. This includes any form of persistence.
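A minimal sketch makes this concrete, using the graphql-js reference implementation; the in-memory array stands in for whatever persistence you choose.

```js
// GraphQL itself persists nothing: "storage" is whatever the resolvers
// touch. Here that is a plain in-memory array, purely for illustration.
const { graphql, buildSchema } = require('graphql');

const schema = buildSchema(`
  type Query { messages: [String!]! }
  type Mutation { addMessage(text: String!): String! }
`);

const db = []; // swap for MySQL, MongoDB, etc.; GraphQL does not care

const root = {
  messages: () => db,
  addMessage: ({ text }) => { db.push(text); return text; },
};

graphql({ schema, source: 'mutation { addMessage(text: "hello") }', rootValue: root })
  .then(() => graphql({ schema, source: '{ messages }', rootValue: root }))
  .then(res => console.log(res.data)); // { messages: [ 'hello' ] }
```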
GraphQL itself does not store anything by default: if no storage is provided, the data only lives in whatever your resolvers return and in the cache your client keeps in the browser (Relay, for instance, holds its store in memory).
I am learning Elasticsearch. I wanted to know how safe (in terms of access control and validating user access) it is to access the ES server directly from the JavaScript API rather than going through your backend. Is it safe to access ES directly from the JavaScript API?
Depends on what you mean by "safe".
If you mean "safe to expose to the internet", then no, definitely not, as there isn't any access control and anyone will be able to insert data or even drop all the indexes.
This discussion gives a good overview of the issue. Relevant section:
Just as you would not expose a database directly to the Internet and let users send arbitrary SQL, you should not expose Elasticsearch to the world of untrusted users without sanitizing the input. Specifically, these are the problems we want to prevent:
Exposing private data. This entails limiting the searches to certain indexes, and/or applying filters to the searches.
Restricting who can update what.
Preventing expensive requests that can overwhelm or crash nodes and/or the entire cluster.
Preventing arbitrary code execution through dynamic scripts.
It's most certainly possible, and it can be "safe" if, say, you're using it as an internal tool behind some kind of authentication. In general, no, it's not secure. The Elasticsearch API can create, delete, update, and search, and that is not something you want to give a client access to, or they could do any or all of those things.
You could, in theory, create role-based auth with Elasticsearch Shield, but it's far from standard practice. It's not really any more difficult to implement search on your backend and just have a simple call that returns the search results.
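As a hedged sketch of what both answers recommend, here is a tiny Express endpoint that does the search on the backend, with a fixed index, a capped result size, and a server-side filter, so the browser never talks to Elasticsearch directly. It assumes Express and the @elastic/elasticsearch v8 client; the articles index and its title and visibility fields are made-up examples.

```js
// A fixed index, a capped page size, and a server-side filter: the client
// only ever supplies a plain search string, never raw Elasticsearch JSON.
const express = require('express');
const { Client } = require('@elastic/elasticsearch');

const es = new Client({ node: 'http://localhost:9200' });
const app = express();

app.get('/search', async (req, res) => {
  // Sanitize: accept only a short plain-text query string.
  const q = String(req.query.q || '').slice(0, 100);
  const result = await es.search({
    index: 'articles',   // fixed: the client cannot choose the index
    size: 20,            // capped: prevents expensive, huge responses
    query: {
      bool: {
        must: { match: { title: q } },
        filter: { term: { visibility: 'public' } }, // hide private data
      },
    },
  });
  res.json(result.hits.hits.map(h => h._source));
});

app.listen(3000);
```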
I'm fairly new to this and have virtually no experience with Couchbase, so I might just have overlooked some easy and direct way to get the data it contains into RStudio. Do you know of a solution?
Do I have to route such queries through some other platform?
Any tips or suggestions would be very much appreciated.
CouchDB has a RESTful HTTP interface, so you can query the database directly using that. There is this package for R that should make it easier for you:
http://cran.r-project.org/web/packages/RCurl/index.html
Here is an example of a URI that might be used to query a Couch view:
/database/_design/designdocname/_view/viewname.
Bear in mind that Couch is rather unusual in that getting data out of it requires views to be set up in the database. You may be unable to make certain queries without creating views in the admin interface.
But if you are just querying existing views, an http client is all you need.
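To illustrate just how plain the HTTP interface is, here is a sketch of querying an existing view with JavaScript's fetch; R's RCurl (for example getURL) would issue the identical GET, since only the URI shape matters. The database, design document, and view names are hypothetical.

```js
// Querying an existing CouchDB view is a plain HTTP GET.
const base = 'http://localhost:5984';
const uri = `${base}/mydatabase/_design/mydesign/_view/myview?limit=10&include_docs=true`;

fetch(uri)
  .then(res => res.json())
  .then(body => {
    // CouchDB returns { total_rows, offset, rows: [{ id, key, value, doc }] }
    body.rows.forEach(row => console.log(row.key, row.value));
  });
```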