Connection is based on `array`: is this a design style guide for designing a Relay server? - graphql

In connection/arrayconnection.js, it seems all the functions are designed to work with arrays.
For example, offsetToCursor is the only way to generate a cursor. Does this mean it is a design pattern I must follow, or does it imply that I should generate cursors myself when using something other than an array? If I am planning to use MongoDB, should I make the database interface behave like a static array?
BTW:
As a newbie to web development, I am a bit confused about how to implement a proper Relay server.
Are there any guides for designing a GraphQL/Relay server? Should I follow graphql-relay-js all the way? Which database does Facebook use with its Relay server: MySQL, or something else?
I am not sure whether asking this here is appropriate, but graphql-relay-js is rarely discussed on the web.
Thanks a lot, and forgive me if this is impolite.
var PREFIX = 'arrayconnection:';

/**
 * Creates the cursor string from an offset.
 */
export function offsetToCursor(offset: number): ConnectionCursor {
  return base64(PREFIX + offset);
}
Additional question:
Maybe I can get some ideas from developers.facebook.com/docs/graph-api.
It seems I should keep an array-style cache for pagination lookups (not sure about this).
But the Graph API looks a bit different from graphql-relay-js (is the Graph API still partly old RESTful style?).
What is the relationship between the Graph API and graphql-relay-js? Is graphql-relay-js a common design guide for building a GraphQL server at Facebook?
Thanks a lot! Please give me some hints.

Connection is a design pattern that your schema may implement if you want Relay to perform efficient pagination. How it gets implemented on the backend is an implementation detail. It may be backed by something array-like, or it may not (think about something like the infinite scrolling news feed on Facebook, which is ranked by a terribly sophisticated backend service: this is clearly not backed by an array). We provide the arrayconnection.js module as a way of demonstrating how this can be done if your data source has that array-like nature. If it does not, or cannot be efficiently converted to it, you are better off implementing something from scratch.
Cursors are opaque identifiers. You could use an array index or some kind of primary key if you are using an array source or a typical database backend (like MySQL), but again the details are implementation-specific and should be chosen to suit your back end. The only requirement is that the cursor should encode whatever information you need on the server to be able to return the next page of results after (or before) that point.
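For instance, a MongoDB-backed connection could encode the `_id` of a document in the cursor instead of an array offset. Here is a minimal sketch of that idea; the helper names and the pagination logic are illustrative assumptions, not part of graphql-relay-js:

// Sketch: opaque cursors for a MongoDB-backed connection.
// Instead of an array offset, the cursor encodes the document's _id.
const { ObjectId } = require('mongodb');

const PREFIX = 'mongoconnection:';

function encodeCursor(id) {
  return Buffer.from(PREFIX + id.toString()).toString('base64');
}

function decodeCursor(cursor) {
  const decoded = Buffer.from(cursor, 'base64').toString('utf8');
  return decoded.slice(PREFIX.length);
}

// Resolve the first `first` documents after the cursor `after`.
async function resolvePage(collection, { first, after }) {
  const filter = after ? { _id: { $gt: new ObjectId(decodeCursor(after)) } } : {};
  const docs = await collection.find(filter).sort({ _id: 1 }).limit(first).toArray();
  return {
    edges: docs.map((doc) => ({ node: doc, cursor: encodeCursor(doc._id) })),
    // Crude check: a full page suggests there may be more results.
    pageInfo: { hasNextPage: docs.length === first },
  };
}

Because the cursor is opaque to the client, you can later change what it encodes (for example, a ranking score instead of an _id) without breaking anything on the front end.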
graphql-relay-js is just a collection of modules that provide some helpers for building Relay-compatible GraphQL schemas in JavaScript. The schema provides a uniform interface to your data, but the actual underlying storage can be anything you want to plug into it (a MySQL database, an object in memory, some REST service). For simple examples, look in the examples directory in the Relay repo. As an illustration of how you can put a schema in front of something that is not a traditional database, this is an example of a schema that reads its data out of a Git repo, with the help of indices in Redis and cached data in memcached.
Stay away from developers.facebook.com/docs/graph-api; despite the "graph" in the name this is an entirely different thing and has nothing to do with the GraphQL hierarchical query language that Relay uses.

Related

Tool or code to take a GraphQL schema + set of operations and produce a schema subset?

Let's say I've got the following:
A giant GraphQL schema for some service (in .graphql or introspection JSON format)
A set of operations that my code wants to perform against that schema.
I'd really like to generate a "subset schema": just the pieces (types/enums/etc) of that big schema that my service actually uses. Is there a tool or a piece of code that can do this today easily?
The reason I want this is that we want to mock graphql services to write isolated tests of particular microservices, and we want to mock out just the bits we actually use, and keep track of any changes in our schema usage over time.
I never found a solution to this, so eventually I wrote one!
https://github.com/xometry/graphql-code-generator-subset-plugin - this tool is a plugin for graphql-code-generator that will produce a schema subset.
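For a sense of how such a tool can work, here is a rough sketch of the core idea: walk each operation with graphql-js's TypeInfo visitor and record every type it touches. This is an illustrative assumption, not the actual implementation of the plugin above:

// Sketch: record every named type an operation uses against a schema.
const { parse, visit, TypeInfo, visitWithTypeInfo } = require('graphql');

function collectUsedTypeNames(schema, operationSource) {
  const typeInfo = new TypeInfo(schema);
  const used = new Set();
  visit(parse(operationSource), visitWithTypeInfo(typeInfo, {
    enter() {
      const type = typeInfo.getType();
      // Strip list/non-null wrappers like [User!]! down to "User".
      if (type) used.add(String(type).replace(/[\[\]!]/g, ''));
    },
  }));
  // A real tool would also walk input types, enums, and interface
  // implementations, then emit a pruned schema containing only `used`.
  return used;
}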

Is it possible to change a GraphQL schema at runtime and write resolvers that handle this dynamic schema? (in Java)

I am working with GraphQL (in Java) and I would like to find a way to do the following:
I need to be able to constantly adapt the GraphQL schema at runtime, without a restart. In particular, I need to be able to add new fields to GraphQL types. Moreover, I need to be able to write resolvers which can handle this dynamic schema.
I do not have example code yet, so just think of the simplest example (one GraphQL type with several fields that can all be of different type).
My problem is that I am quite new to GraphQL and do not have a lot of experience with it. Of course I looked for a solution on the internet, but I have not found one yet (or did not notice that I found it, due to my lack of experience with GraphQL).
The only interesting discovery I made is this: exposing dynamic schemas with graphql. But I do not understand how this solution works, because 1) I do not know how to reload the schema at runtime, and 2) I do not know how to write the resolvers so that they can handle that dynamic schema.
So can anybody help me with my problem and/or can answer my questions regarding the link I found?
I am very thankful for any help, no matter how extensive. As I said before, I am quite new to GraphQL, so I would also be very thankful for links to examples (if possible), so that I can understand better.
Thank you very much in advance.
#userongithub0 you may take a look at GraphQL Schema Directives, and specifically at the rest directive.
First of all, don't ever try to do that, or use it only in some very strict situations. Here's why:
A schema is like a contract between the front end and the back end; changing it can lead to instability between both of them very quickly.
If you try to change the GraphQL schema at runtime, it might fail to connect properly with your resolvers, and consequently with your database as well.
Whenever there is a change in the schema, the GraphQL server (the server handler, in general) needs to be restarted (recompiled), and that takes time, hence resulting in high response times.
No matter what language you are using, you should always see this as a red flag. In my opinion, it would be a really bad practice.

Can GraphQL Queries be named, kind of like stored procedures, and reused?

I'm building a Graphene-Django based GraphQL API. One of my colleagues, who is building an Angular client that will use the API, has asked if there's a way to store frequently used queries somehow on the server side so that he can just call them by name.
I have not yet encountered such functionality so am not sure if it's even possible.
FYI he is using the Apollo Client, so maybe such "named" queries are strictly client-side? Here's a page he referred me to: http://dev.apollodata.com/angular2/cache-updates.html
Robert
Excellent question! I think the thing you are looking for is called "persisted queries." The GraphQL spec only outlines:
A Type System for a schema
A formal language for queries
How to validate/execute a query against a schema
Beyond that, it is up to the implementation to make specific optimizations. There are a few ways to do persisted queries, and different ones may be more or less helpful for your project.
Storing Queries as a String
Queries can easily be stored as strings, and the convention is to use *.gql files to do that. Many editors/IDEs will even have syntax highlighting for them. To consume them later, just URL-encode them and you're all set! Since these strings are "known," you can whitelist the requests on the server if you choose.
const myQuery = `
  {
    user {
      firstName
      lastName
    }
  }
`;

// encodeURIComponent makes the query safe to send in a URL
const url = `https://www.myserver.com/graphql?query=${encodeURIComponent(myQuery)}`;
Persisted Queries
For a more sophisticated approach, you can take queries that are extracted from your project (either from strings or using a build tool), pre-run them and put the result in a DB. This is what Facebook does. There are plenty of tools out there to help you with this, and the Awesome-GraphQL repo is a good place to start looking.
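As a rough sketch of the server side of this idea (the query ids, schema, and route below are made-up assumptions, just to show the lookup-by-id mechanic):

// Sketch: serve only pre-registered ("persisted") queries, looked up by id.
// The map would normally be generated at build time from your *.gql files.
const express = require('express');
const { graphql, buildSchema } = require('graphql');

const schema = buildSchema(`
  type User {
    firstName: String
    lastName: String
  }
  type Query {
    user: User
  }
`);

const rootValue = {
  user: () => ({ firstName: 'Ada', lastName: 'Lovelace' }),
};

const persistedQueries = {
  'user-names-v1': '{ user { firstName lastName } }',
};

const app = express();
app.use(express.json());

app.post('/graphql', async (req, res) => {
  const source = persistedQueries[req.body.queryId];
  if (!source) {
    // Rejecting unknown ids doubles as a server-side whitelist.
    return res.status(400).json({ error: 'unknown query id' });
  }
  const result = await graphql({ schema, source, rootValue, variableValues: req.body.variables });
  res.json(result);
});

app.listen(4000);

The client then sends only { "queryId": "user-names-v1" } instead of the full query text, which saves bandwidth and locks down what can be executed.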
Resources
Check out this blog for more info on Persisted Queries

Streamlining the implementation of a Repository Pattern and SOA

I'm working with Laravel 5, but I think this question applies beyond the scope of a single framework or language. The last few days I've been all about writing interfaces and implementations for repositories, and then binding services to the IoC container and all that stuff. It feels extremely slow.
If I need a new method in my service, say, Store::getReviews(), I must create the relationship in my entity model class (the data source, in this case Eloquent), then I must declare the method in the repo interface to make it required for any other implementation, then I must write the actual method in the repo implementation, then I have to create another method on the service that calls the repo to extract all reviews for the store... (intentional run-on sentence) It feels like too much.
Creating a new model now isn't as simple as extending a base model class anymore. There are so many files I have to write and keep track of. Sometimes I'll get confused as to where exactly I should put something, or find halfway through setting up a method that I'm in the wrong class. I also lost Eloquent's query building in the service. Every time I need something that Eloquent has, I have to implement it in the repo and the service.
The idea behind this architecture is awesome, but the actual implementation I am finding extremely tedious. Is there a better, faster way to do things? I feel I'm being too messy, even though I put common methods and such in abstract classes. There's just too much to write.
I've wrestled with all this stuff as I moved to Laravel 5. That's when I decided to change my approach (it was a tough decision). During this process I've come to the following conclusions:
I've decided to drop Eloquent (and the Active Record pattern). I don't even use the query builder. I do still use the DB facade, as it's handy for things like parameterized query binding, transactions, logging, etc. Developers should know SQL, and if they are required to know it, then why force another layer of abstraction on them (a layer that cannot replace SQL fully or efficiently)? And remember, the bridge from the OOP world to the relational database world is never going to be pretty. Bear with me, keep reading...
Because of #1, I switched to Lumen where Eloquent is turned off by default. It's fast, lean, and still does everything I needed and loved in Laravel.
Each query fits in one of two categories (I suppose this is a form of CQRS):
3.1. Repositories (commands): These deal with changing state (writes) and situations where you need to hydrate an object and apply some rules before changing state (sometimes you have to do some reads to make a write; also, sometimes you do bulk writes where hydration may not be efficient, so just create repository methods that do that too). So I have a folder called "Domain" (for Domain-Driven Design), and inside are more folders, each representing how I think of my business domain. Each entity has a paired repository. An entity here is a class like what others may call a "model": it holds properties and has methods that help me keep the properties valid or do work on them that will eventually be persisted via the repository. The repository is a class with a bunch of methods representing all the types of querying I need to do that relate to that entity (e.g. $repo->save()). The methods may accept a few parameters (to allow for a bit of dynamic query action inside, but not too much), and inside you'll find the raw queries and some code to hydrate the entities. You'll find that repositories typically accept and/or return entities.
3.2. Queries (a.k.a. screens?): I have a folder called "Queries" containing classes whose methods hold raw queries to perform display work. The classes mostly just group related things together; they aren't the same as repositories (i.e. they don't do hydrating or writes, and don't return entities). The goal is to use these for reads and most display purposes. (See the sketch after this list.)
Don't interface unnecessarily. Interfaces are good for polymorphic situations where you need them, i.e. situations where you know you will be switching between multiple implementations. They are unneeded extra work when you are working 1:1. Plus, it's easy to take a class and turn it into an interface later. You never want to prematurely over-optimize.
Because of #4, you don't need lots of service providers. I think it would be overkill to have a service provider for all my repositories.
If the almost mythological time comes when you want to switch out database engines, then all you have to do is go to two places. The two places mentioned in #3 above. You replace the raw queries inside. This is good, since you have a list of all the persistence methods your app needs. You can tailor each raw query inside those methods to work with the new data-store in the unique way that data-store calls for. The method stays the same but the internal querying gets changed. It is important to remember that the work needed to change out a database will obviously grow as your app grows but the complexity in your app has to go somewhere. Each raw query represents complexity. But you've encapsulated these raw queries, so you've done the best to shield the rest of your app!
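To make the repository/query split from point 3 concrete, here is a minimal sketch. It is in JavaScript rather than PHP, and every name in it (Store, StoreRepository, StoreReviewsQuery, the db.query helper) is invented purely for illustration:

// A plain entity: holds properties and keeps them valid.
class Store {
  constructor({ id, name }) {
    this.id = id;
    this.name = name;
  }
}

// Write side (3.1): hydrates entities, applies rules, persists changes.
class StoreRepository {
  constructor(db) { this.db = db; }

  async findById(id) {
    const rows = await this.db.query('SELECT * FROM stores WHERE id = ?', [id]);
    return rows[0] ? new Store(rows[0]) : null; // hydrate an entity
  }

  async save(store) {
    await this.db.query('UPDATE stores SET name = ? WHERE id = ?', [store.name, store.id]);
  }
}

// Read side (3.2): returns plain rows shaped for display; no entities, no writes.
class StoreReviewsQuery {
  constructor(db) { this.db = db; }

  async forStore(storeId) {
    return this.db.query(
      'SELECT rating, body, created_at FROM reviews WHERE store_id = ?',
      [storeId]
    );
  }
}

Note how swapping database engines later means editing only the raw SQL inside these two kinds of classes, exactly as described in point 6.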
I'm successfully using this approach inspired by DDD concepts. Once you are utilizing the repository approach then there is little need to use Eloquent IMHO. And I find I'm not writing extra stuff (as you mention in your question), all while still keeping my app flexible for future changes. Here is another approach from a fellow Artisan (although I don't necessarily agree with using Doctrine ORM). Good Luck and Happy Coding!
Laravel's Eloquent is an Active Record implementation, and this technology demands a lot of processing. Domain entities are better understood as plain objects; for that purpose, try Doctrine ORM. I built a facilitator for using Lumen with Doctrine ORM; follow the link:
https://github.com/davists/Lumen-Doctrine-DDD-Generator
For accurate performance analysis there is KCachegrind:
http://kcachegrind.sourceforge.net/html/Home.html

Which one do you prefer for searching/reporting: DataTable, DTO, or Domain Class?

The project I am currently working on requires a lot of searching/filtering pages. For example, I have a complex search page to get Issues by date, category, unit, ...
The Issue domain class is complex and contains lots of value objects and child objects.
I am wondering how people deal with searching/filtering/reporting in the UI. As far as I know I have three options, but none of them makes me happy.
1.) Send parameters to a Repository/DAO to get a DataTable, and bind the DataTable to UI controls, for example an ASP.NET GridView:
DataTable dataTable = issueReportRepository.FindBy(specs);
.....
grid.DataSource = dataTable;
grid.DataBind();
In this option I can simply bypass the domain layer and query the database for the given specs. I don't have to get a fully constructed, complex domain object; there is no need for value objects, child objects, etc. I get the data to be displayed in the UI into a DataTable directly from the database and show it in the UI.
But if I have to show a calculated field in the UI, like a method's return value, I have to do this in the database because I don't have the full domain object. I have to duplicate logic, and I get DataTable problems like no IntelliSense, etc.
2.) Send parameters to a Repository/DAO to get DTOs, and bind the DTOs to UI controls:
IList<IssueDTO> issueDTOs = issueReportRepository.FindBy(specs);
....
grid.DataSource = issueDTOs;
grid.DataBind();
This option is the same as the one above, but I have to create anemic DTO objects for every search page. Also, for different Issue search pages I have to show different parts of the Issue objects: IssueSearchDTO, CompanyIssueDTO, MyIssueDTO, ...
3.) Send parameters to the real Repository class to get fully constructed domain objects:
IList<Issue> issues = issueRepository.FindBy(specs);
//Bind to grid...
I like Domain-Driven Design and patterns, and there is no DTO or duplicated logic in this option. But in this option I have to create lots of child and value objects that will not be shown in the UI. It also requires lots of joins to get the full domain object, plus a performance cost for needless child objects and value objects.
I don't use any ORM tool. Maybe I could implement lazy loading by hand for this version, but it seems a bit overkill.
Which one do you prefer? Or am I doing it wrong? Are there any suggestions or better ways to do this?
I have a few suggestions, but of course the overall answer is "it depends".
First, you should be using an ORM tool or you should have a very good reason not to be doing so.
Second, implementing Lazy Loading by hand is relatively simple so in the event that you're not going to use an ORM tool, you can simply create properties on your objects that say something like:
private Foo _foo;

public Foo Foo
{
    get
    {
        // Load the related object only on first access
        // (assumes _repository and id are available on this class).
        if (_foo == null)
        {
            _foo = _repository.Get(id);
        }
        return _foo;
    }
}
Third, performance is something that should be considered initially but should not drive you away from an elegant design. I would argue that you should use (3) initially and only deviate from it if its performance is insufficient. This results in writing the least amount of code and having the least duplication in your design.
If performance suffers you can address it easily in the UI layer using Caching and/or in your Domain layer using Lazy Loading. If these both fail to provide acceptable performance, then you can fall back to a DTO approach where you only pass back a lightweight collection of value objects needed.
This is a great question and I wanted to provide my answer as well. I think the technically best answer is to go with option #3. It provides the ability to best describe and organize the data along with scalability for future enhancements to reporting/searching requests.
However, while this might be the overall best option, there is a huge cost IMO vs. the other two options: the additional design time for all the classes and relationships needed to support the reporting needs (again, under the premise that no ORM tool is being used).
I struggle with this in a lot of my applications as well, and the reality is that #2 is the best compromise between time and design. Now, if you were asking about your business objects and all their needs, there is no question that a fully laid-out and properly designed model is important, and there is no substitute. However, when it comes to reporting and searching, this to me is a different animal. #2 provides strongly typed data in the anemic classes and is not as primitive as hardcoded values in DataSets like #1, yet still greatly reduces the amount of time needed to complete the design compared to #3.
Ideally I would love to extend my object model to encompass all reporting needs, but sometimes the effort required to do this is so extensive that creating a separate set of classes just for reporting is an easier but still viable option. I actually asked almost this identical question a few years back and was also told that creating another set of classes (essentially DTOs) for reporting needs was not a bad option.
So to wrap it up, #3 is technically the best option, but #2 is probably the most realistic and viable option when considering time and quality together for complex reporting and searching needs.
