Share fragments between client and server - graphql

I've got a bunch of GraphQL fragments set up in my React/Apollo app, but I really need to access them on my Node server.
For example, in my client I'm attempting to do this query, to get all relevant Person and Company entities:
query GET_REPORTING_CLIENTS {
  reportingClients {
    people {
      ...PersonFragment
    }
    companies {
      ...CompanyFragment
    }
  }
}
Now, I can't just pass info into the queries on my server, because context.db.query.person obviously won't have a key for 'people'.
Ideally, I'd be able to go:
context.db.query.person({
  where: (query details)
}, PersonFragment)
...but that doesn't work because the server doesn't have the fragment. At the moment I'm getting around it by copy-pasting huge blocks of GraphQL from the client to the server app, but it's a really poor solution.
Is there an answer, or does everything just need to double up?

I recommend using yarn workspaces, especially if you plan to make a mobile app. You can package chunks of code so they can be shared between the frontend and backend applications.
https://yarnpkg.com/en/docs/workspaces
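As a minimal sketch of that setup (the package name, file path, and fields are assumptions, not from your app), the fragments could live in a shared workspace package as plain strings, which both sides can consume:

// packages/graphql-shared/fragments.js -- hypothetical shared workspace
// package, e.g. resolved locally as "graphql-shared"
const PersonFragment = `
  fragment PersonFragment on Person {
    id
    name
  }
`;

const CompanyFragment = `
  fragment CompanyFragment on Company {
    id
    name
  }
`;

module.exports = { PersonFragment, CompanyFragment };

The React client can interpolate these strings into its gql documents, and the server can pass them into calls like context.db.query.person, so nothing needs to be copy-pasted.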

Related

VueJS SPA dynamic baseURL for axios

I've searched and searched and can't seem to find a pattern for this. I'd consider myself an intermediate Vue dev, though the backend is my strong suit. I'm working on an app that will be white-labeled by resellers. While it's possible to have multiple builds, avoiding that would be ideal.
The setup is a stand-alone vue-cli SPA connecting to a Laravel API backend and using the Sanctum auth package, so I need calls to the same domain.
The issue: resellers will be on their own domain. The ask: is there a pattern/solution for dynamically loading configs (mainly baseURL) for different domains (other items would be theme/stylesheet)? Currently I have a few typical entries:
i.e. axios.defaults.baseURL = process.env.VUE_APP_API_BASE_URL
Basically, based on the domain the site is being served on, I'd like a dynamic/runtime config. I feel like this has been solved, but I can't seem to use the right search terms for some direction, so anything is helpful. I've tried a few things:
1) Parsing in JS, but I can't seem to get it to run early enough in the process to take effect. It seems to work, but I can't get it to "click".
2) Hitting a public API endpoint with the current domain and getting the config back. Again, I can implement it, but can't seem to get it to inject into the Vue side correctly.
Any resources, pattern references or general guidance would be much appreciated, to avoid maintaining multiple builds merely for a few variables. That said, I don't think there's much overhead in any of this, but I'm also open to being told I'm wrong and that I need multiple builds.
End Result
url visited is https://mydomain.com
then baseURL = https://api.mydomain.com
url visited https://resellerdomain.com
then baseURL = https://api.resellerdomain.com
I don't think there is a common pattern to solve your problem - I haven't found anything on the net.
The best software design solution could be the following:
have a single back-end
distribute only the client to your customers/resellers
Obviously the back end could see the domain of the application from which the request comes and manage the logic accordingly.
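As an illustration of that idea only (the question's backend is Laravel; this Express-style sketch just shows the shape, and the header handling is an assumption):

// branch backend logic on the domain the request came from
app.use(function (req, res, next) {
  var origin = req.get('Origin') || '';
  req.isReseller = origin.indexOf('resellerdomain.com') !== -1;
  next();
});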
Good luck with your project.
Honestly, the way the question is put isn't entirely clear to me, but my usual pattern is to create an axios instance like so:
export const axiosInstance = axios.create({
  // ...configs
  baseURL: process.env.VUE_APP_URL_YOU_WOULD_LIKE_TO_HIT
})
and then use that instance whenever I make a request to an API.
EDIT: According to your edit, you can either release the client to each customer with a separate .env file for each of them, or you can have a gateway system, where the client's axios endpoint is always the same, always hitting the same server, which then decides what to ping based on your own logic.
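For the "End Result" mapping in the question specifically, a minimal runtime sketch (assuming the API always lives on an api. subdomain of whatever host serves the SPA) needs no per-customer build at all:

import axios from 'axios';

// derive the baseURL at runtime from the domain the SPA is served on,
// so a single build works for every reseller domain
export const axiosInstance = axios.create({
  baseURL: 'https://api.' + window.location.hostname
});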

Extract Graphql queries sent by a browser application with Apollo client

I am trying to simplify the process of exporting GraphQL queries sent by my application for documentation purposes. For the record, I want to be able to paste those queries into Postman collections.
Here are my different approaches:
Relying on .graphql files: first, it's still very difficult to set up with a full-fledged TypeScript + Webpack + Babel setup (using Next.js). Anyway, it does not provide variables, so you only have half the query.
Relying on the network tab. From there, we can copy query contents and import them into Postman. Combined with Cypress it could provide an awesome workflow. It works OK, but Apollo Client sends queries as JSON objects, which are difficult to interpret.
I tried to use the "application/graphql" content-type. It's way more readable and available in Postman. But it is non-standard, and thus not available in Apollo Client.
So my question is rather open, but what are the possibilities for extracting the GraphQL queries (and variables) sent by my browser and injecting them into Postman?
The most promising solutions seem to be enabling "application/graphql" client-side, or converting the JSON representation back to a string representation. But I could explore another possibility (e.g. using Apollo Engine as an intermediary).
A way to do this is to use the apollo CLI tool. It includes a client:extract command that extracts all of the GraphQL operations/documents in your application into a file. You run the tool on your JS(X)/TS(X) files and it extracts the GraphQL documents into a file that looks like this (this output is the result of pointing the tool at a single file containing a single query):
{
  "version": 2,
  "operations": [
    {
      "signature": "b4f318e6aebcc3631bc88eedef09c6001bb8c310917e97ee6df4a99e17c3c056",
      "document": "query BootstrapQuery{user:viewer{__typename delivery_time_1 delivery_time_2 devices{__typename fcm_token id notification{__typename enabled}}has_password id label location name next_delivery_string oauths{__typename email id name picture provider}plan plan_billing_service plan_expires plan_since plan_will_renew profile_picture recovery_email timezone username}}",
      "metadata": {
        "engineSignature": ""
      }
    }
  ]
}
You can then use that file however you want.
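As a rough sketch of one way to use it (the file name is an assumption): the document field is already a plain GraphQL string, so a few lines of Node can dump every operation in a form that pastes straight into Postman:

// dump-operations.js -- print each extracted operation as plain GraphQL
const fs = require('fs');

const manifest = JSON.parse(fs.readFileSync('operations.json', 'utf8'));
for (const op of manifest.operations) {
  console.log(op.document + '\n');
}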
In my case, I use this tool to publish an allow-list of operations to Hasura. I'm not sure what you mean by injecting queries into Postman, but I think this tool may provide you with an automated start that would be an improvement over manual copy/pasting.

Different project structures when building an Angular App with separated Spring Backend

I want to build a web page where I separate the database, backend and frontend and make them communicate via REST. I got quite confused about how to structure the project(s). As I read, there are the following approaches:
Make different projects: one for the frontend (let's say Angular) and one for the backend (Spring), including the database connection. These are completely separated from each other, and different IDEs might be used.
Build it in one big project but still use REST for communication (see picture below).
Now what I would like to know is: what is the difference between these two approaches? I'm not asking which one is better, but I cannot even make out reasons or effects that would favor one over the other.
This is really a question about personal preference, but you can have them in completely different projects and still put your JS bundles (from ng build) inside the resources/static folder of the Spring app, and it'll work perfectly, assuming you want to run them on the same server.
You can set a proxy config to make it easier like:
{
  "/api": {
    "target": "http://localhost:8080",
    "secure": false
  }
}
This way, whenever you make a REST call with something like Angular's HttpClient, as long as you put /api in front of the URL it'll call your Spring backend.
Example:
public fetchResource(id: number): Observable<Resource> {
  return this.http.get<Resource>(`/api/resources/${id}`);
}
I prefer to have my client and API in different projects.
Whenever you want to add the JS bundles to your resources/static folder, you can just create an NPM script in package.json to do it for you.
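A hypothetical version of such a script (the paths and copy command are assumptions and depend on your project layout) could look like this in the client's package.json:

{
  "scripts": {
    "build": "ng build --prod",
    "postbuild": "cp -r dist/my-app/* ../server/src/main/resources/static/"
  }
}

npm runs postbuild automatically after build, so a single npm run build produces the bundles and copies them into the Spring app.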

How do I show progress of a long-running server operation (Web API Commanding) in the Lightswitch HTML client?

I have a VS 2013 Lightswitch HTML Client application to which I've added a button that makes a Web API REST post. This basically 'refreshes' the data in the table from the original upstream source. This is all working correctly, but the operation takes a few minutes, and I want to report status to the user as it runs.
Right now, I've tried attaching a simple Refresh when the post returns as follows:
$.post("/api/data/", "Refresh", function (response) {
  screen.getData().then(function (newData) { screen.reQuery(); });
});
This doesn't actually seem to do a refresh (screen.reQuery is apparently the wrong call), but the better option would be to have the server report progress of this long-running operation instead.
One thought I had would be to have the server call return data in the form of "percent done" in the response as it processes it, but I don't know if this would be delivered to the client piecemeal, nor the best way to display this to the user in Lightswitch.
I'm open to other third-party libraries that might help with this, but I'd like to stick with WebAPI for commanding instead of adding something like SignalR for now, if possible. Thanks!
In general, it's not a good idea to run operations that take minutes synchronously on the server.
A reasonable alternative is to create a single call that in turn creates multiple Web Jobs (see Azure Web Jobs for more info). The Web Jobs would be broken into smaller individual tasks, and your HTML client would query the status of the web jobs rather than waiting on your Web API call.
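A rough client-side sketch of that polling idea (the status endpoint, its response shape, and the screen property are all assumptions, not an existing API):

// kick off the long-running refresh, then poll a status endpoint
$.post("/api/data/", "Refresh", function (job) {
  var timer = setInterval(function () {
    $.get("/api/data/status/" + job.id, function (status) {
      // hypothetical string property defined on the screen, bound to a label
      screen.refreshStatus = "Refreshing: " + status.percentDone + "%";
      if (status.percentDone >= 100) {
        clearInterval(timer);
        screen.getData(); // reload the table once the job completes
      }
    });
  }, 2000);
});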

Dynamic routing performance in node.js with express

I have to create routes of the type /:username in an Express application. I can think of two ways to do this, and I wonder which is better performance-wise. The first is to serve the route dynamically: make a call to the DB, and if the username exists, serve the profile. The second is to create a function so that when a user is created, his profile URL is hardcoded into the app as a route, and removed again when the user is deleted. That way there would be no DB calls when a URL of this type is requested. So the question is: what would be the performance problems in the second case, if any, and what are the advantages and disadvantages of each approach, mainly performance-wise?
Do the first one. I can't speak definitively about performance (though I feel the first will be faster in the long run), but what happens if your application were to become (not saying it would) as popular as Facebook, and you then have 1,000,000,000 routes in your Express application? Even trying to start your app would get ridiculous.
Databases can handle this, and if you're really worried about it you could keep a cache of the usernames that have already been checked. Add them when you first check it, and delete them if the username is deleted.
It also occurs to me: won't you have to perform pretty much the same query to get the information to populate the profile anyway? If you are suggesting creating static pages for each profile when the account is created, don't do this. This is what databases were designed for, so it is perfectly safe to use them in this way.
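A minimal sketch of that cache (assuming a mongoose User model like the one in the answer below):

// in-memory cache of username existence checks
var usernameCache = {};

function usernameExists(username, callback) {
  if (usernameCache.hasOwnProperty(username)) {
    return callback(null, usernameCache[username]);
  }
  User.findOne({ username: username }, function (err, user) {
    if (err) return callback(err);
    usernameCache[username] = !!user;
    callback(null, !!user);
  });
}

// remember to delete usernameCache[username] when an account is removed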
I simply use /:username and I have it below my other routes, so it doesn't supersede other pages like /login.
If there is no user for that username, then I redirect them to the homepage.
Using mongoose, you can do something like this:
// app.js
app.get('/:username', routes.profile.get);

// route handler (e.g. routes/profile.js)
exports.get = function (req, res) {
  User.findOne({ username: req.params.username }, function (err, owner) {
    if (!owner) {
      req.flash('error', 'Woops, looks like that account doesn\'t exist.');
      return res.redirect('/');
    }
    // do something with owner
  });
};
