I'm creating a web app that uses GraphQL; the requirement is to handle GraphQL operations over WebSocket. I managed to achieve this using subscriptions-transport-ws, but I'm stuck on handling file uploads. I came across streaming files from client to server using socket.io-stream, but that leads to having two separate APIs, one for textual data and one for files. So I was wondering if there is a way to combine this functionality into GraphQL.
I ran into the exact same problem, and my file sizes were such that converting to base64 wasn't a feasible option. I also didn't want to use a separate library outside of GraphQL because that would require substantial setup changes in my server (to handle both GraphQL and non-GraphQL).
Fortunately, the solution ended up being fairly simple. I created two GraphQL clients on the front-end - one for the majority of my traffic exclusively over WebSockets, and another exclusively over HTTP just for operations that involved file uploads.
Now I can simply specify which client I want depending on whether the operation involves file uploads, without complex changes on my server or impacting the real-time benefits of all of my other queries.
I am also still looking for a proper solution to this, but as an alternative approach you can convert your file data to base64 and pass that data as a string. This approach only works for small files, since the string data type cannot store large amounts of data the way buffers do.
I am currently working on moving our REST-API-based Go service to gRPC, using protobuf. It's a huge service with a lot of APIs, already in production, so I don't want to make so many changes that I ruin the existing system.
So I want to use my Go models as the source of truth, and to generate the .proto messages I think I can manage with this - Generate proto file from golang struct
Now my APIs also expect the request and response according to the defined Go models. I will change them to use the .proto models for request and response, but when a request/response is passed I want to wrap it in my Go models so that the rest of the code doesn't need any changes.
In that case, if the request is small I can simply copy all the fields into my Go model, but for big requests or nested models it's a big problem.
1) Am I doing this the right way?
2) If no, what's the right way?
3) If yes, how can I copy big proto messages to Go models, and vice versa for responses?
If you want to use the Go models as the source of truth, why do you want to use the .proto-generated ones for the REST request/response? Is it because you'd like to use proteus service generation (and share the code between REST and gRPC)?
Usually, if you wanted to migrate from REST to gRPC, the most common way would probably be to use grpc-gateway (note that since around 1.10.x you can use it in-process without resorting to the reverse proxy), but that would be "gRPC-first", where you derive REST from gRPC, while it seems you want "REST-first", since your REST APIs are already in production. For this reason grpc-gateway probably wouldn't be totally suitable, because it could generate slightly different endpoints from your existing ones. It depends on how much you can afford to break backward compatibility (maybe you could generate a "v2" set of APIs and keep the old "v1" around for a while, giving existing clients time to migrate).
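On the field-copying part of the question: one low-effort sketch, when the generated struct and your model share field names, is a serialization round-trip, which handles nested fields as long as the tags line up. The types below are hypothetical stand-ins (real proto messages would typically go through protojson or hand-written converters), but the pattern is the same:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical stand-in for a generated proto struct.
type UserProto struct {
	Id    int64  `json:"id"`
	Name  string `json:"name"`
	Email string `json:"email"`
}

// Existing Go model used by the rest of the service.
type User struct {
	Id    int64  `json:"id"`
	Name  string `json:"name"`
	Email string `json:"email"`
}

// convert marshals src and unmarshals into dst.
// Field names/tags must match; mismatched fields are silently dropped.
func convert(src, dst interface{}) error {
	b, err := json.Marshal(src)
	if err != nil {
		return err
	}
	return json.Unmarshal(b, dst)
}

func main() {
	p := UserProto{Id: 1, Name: "alice", Email: "a@example.com"}
	var u User
	if err := convert(&p, &u); err != nil {
		panic(err)
	}
	fmt.Println(u.Name) // same data, now held in the model type
}
```

The round-trip costs an extra serialization per request; if that matters, generated or hand-written per-type converters are the faster (but more laborious) alternative.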
I am working on a CLI in Go that scrapes a webpage to collect the href attributes of all the links on the page into a slice. I want to store this slice in memory for some time so that the scraper is not being called on every execution of the CLI command. Ideally, the scraper would only be called after the cache expires or the user provides some sort of --update flag.
I came across the library go-cache and other similar libraries, but from what I could tell they only work for something that is continuously running, like a server.
I thought about writing the links to a file, but then how would I expire the results after a specific duration? Would it make sense to create a small server in the background that shuts down after a while in order to use a library like go-cache? Any help is appreciated.
There are two main approaches in these scenarios:
Create a daemon, service or background application that acts as your data repository. You can run it as an HTTP server / RPC server depending on your requirements. Your CLI application then interacts with this daemon as required;
Implement a persistence mechanism that will allow data to be written and read across multiple CLI application executions. You may use normal text files, databases or even an implementation of golang's encoding/gob to write and read your slice (a map would probably be better) to and from a binary file.
You can timestamp entries and simply remove them after their TTL expires, either by explicitly deleting them or by simply not rewriting them during subsequent executions, according to the strategy / approach selected above.
The scope and number of possible examples for such an open-ended question is too broad to cover in a single answer, and will most likely require multiple specific questions.
Use a database and store as much detail as you can (fetched_at, host, path, title, meta_desc, anchors etc). You'll be able to query over the data later and it will be useful to have it in a structured format. If you don't want to deal with a db dependency you could embed something like boltdb (pure go) or sqlite (cgo).
I am using a react-native app with relay modern.
Currently our app's fetchQuery implementation just does a fetch on the network (like in https://facebook.github.io/relay/docs/en/network-layer.html).
There is also the possibility of a local network layer like https://github.com/relay-tools/relay-local-schema, which returns data from a local DB like SQLite/Realm.
Is there a way to setup offline-first response from local-network layer, followed by automatic request to real network which also populates the store with fresher data (along with writing to local-db)?
Also should/can they share the same store?
From the requirements of Network.create(), it should return a promise containing the payload, there does not seem a possibility to return multiple values.
Any ideas/help/suggestions are appreciated.
What you are trying to achieve is complex, so I'll go for the easy approach, which is a long-lived cache.
As you might know, Relay Modern uses a local store that is an exact copy of the data you are fetching; you can configure this store cache to your needs (with no caching on mutations).
To understand how this is achieved, the best library around for customising the Relay Modern or Classic network layer can be found at https://github.com/nodkz/react-relay-network-modern
My recommendation: set up your cache and watch your requests (you are going to love it).
Thinking in Relay,
https://facebook.github.io/relay/docs/en/thinking-in-relay.html
When we build a single-page application, the web server basically does only one thing: it returns data when the client asks for it (using JSON, for example). So any server-side language (PHP, RoR) or tool (Apache, nginx) can do it.
But is there a language/tool that works better with this sort of single-page application, which generates lots of small requests that need low latency and sometimes a persistent connection (for realtime and push features)?
SocketStream seems like it matches your requirements quite well: "A phenomenally fast real-time web framework for Node.js ... dedicated to creating single-page real time websites."
SocketStream uses WebSockets to get lowest latency for the real-time portion. There are several examples on the site to build from.
If you want a lot of small realtime requests with data pushed to the client, you should take a look at socket-based connections.
Check out Node.js with Socket.io.
If you really want to optimize for speed, you could try implementing a custom HTTP server that just fits your needs, for example with the help of Netty.
It's blazingly fast and has examples for HTTP and WebSocket servers included.
Also, taking a look at GWAN may be worthwhile (though I have not tried that one yet).
http://en.wikipedia.org/wiki/Nginx could be appropriate
I'm starting to step into unfamiliar territory with regards to performance improvement and our RIA (Rich Internet Application) built with GWT. For those unfamiliar with GWT, essentially when deployed it's just pure JavaScript. We're interfacing with the server side using a REST-style XML web service via XMLHttpRequest.
Our XML is un-marshalled into JavaScript objects and used within the application to represent the data model behind the interface. When changes occur, the model is updated and marshalled back to XML and sent back to the server.
I've learned the number one rule of performance (in terms of user experience) is to make as few requests as possible. Obviously this brings up the possibility of caching. Caching is great for static data but things get tricky in a multi-user system where data on the server may be changing. Also, use of "Last-Modified" and "If-Modified-Since" requests don't quite do enough since we'd like to avoid unnecessary requests altogether.
I'm trying to figure out if caching data in the browser is even right for us before researching the approaches. I hope someone has tread this path before. I'm looking for similar approaches, lessons learned, things to avoid, etc.
I'm happy to provide more specific info if needed...
For GWT, if performance matters that much to you, you get better performance by sending all the data you need in a single request instead of making multiple small queries. I would recommend against client-side data caching, as there are lots of issues like keeping the data in sync with the database.
Besides, you already have a good advantage with GWT over traditional HTML apps. Unless you are dealing with special data (e.g. data that does not become stale too quickly, which implies mostly-read queries), I have found that there is no special need for client-side caching. You are better off doing service-layer caching, since most of the time should be spent in server-side processing.
If you can provide more details about the nature of the app, maybe some different conclusions can be taken.