We have created some API endpoints that return GeoJSON responses (FeatureCollections) to specific calls. The spatial data is stored in our database (SQL Server), and we use EF6 and GeoJSON.NET to convert it into proper GeoJSON. This works fine most of the time. Sometimes, though, a response contains over 3,000 polygons (postcode areas) and their properties (name etc.), and then it grows to nearly 250 MB. We already Gzip-compress these responses, which brings them down to roughly 75 MB, but that is still too slow to download for some of our clients.
Is it somehow possible to divide the GeoJSON response into smaller parts and send them to the client? I know something similar can be done with binary data (like images), where we use chunked transfer. Can we also use that for JSON data? If so, does anyone know how?
[Screenshot: Firefox console with the large API response]
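For illustration: chunked transfer encoding works for any content type, JSON included, but it only streams the same large body in pieces; it does not make it smaller. Splitting the data itself (paging or filtering the features) is what reduces what each request carries. A minimal client-side sketch in TypeScript, assuming a hypothetical page/pageSize query interface that the endpoint would have to support:

```typescript
// Hypothetical paging client: pull the FeatureCollection in smaller pieces
// and merge the features locally. The page/pageSize parameters are assumptions,
// not part of the existing API.
interface FeatureCollection {
  type: "FeatureCollection";
  features: unknown[];
}

async function fetchAllFeatures(baseUrl: string, pageSize = 250): Promise<FeatureCollection> {
  const merged: FeatureCollection = { type: "FeatureCollection", features: [] };
  for (let page = 1; ; page++) {
    const res = await fetch(`${baseUrl}?page=${page}&pageSize=${pageSize}`);
    if (!res.ok) throw new Error(`Request failed: ${res.status}`);
    const part = (await res.json()) as FeatureCollection;
    merged.features.push(...part.features);
    if (part.features.length < pageSize) break; // last (possibly partial) page
  }
  return merged;
}
```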
Related
I'm building an image gallery with a bulk-upload function, which can produce image arrays of up to 2 GB in size that then have to be sent to the Laravel backend. I guess it would work if I split the data up manually in the frontend, sent the chunks as separate requests, and merged them back together in the backend. Assuming a request size of 5 MB, this would produce ~400 POST requests, and I'm not sure how well the server would handle that.
I was wondering if there is a convenient way of sending large requests, maybe with a library that handles the chunking?
I'm using Inertia but couldn't find a corresponding method to deal with my problem. Maybe Axios has such functionality?
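For what it's worth, here is a rough sketch of the manual approach described above, in TypeScript with Axios. The /upload-chunk route and its field names are assumptions; the backend would have to store each piece and reassemble the file. Libraries such as Resumable.js or tus-js-client implement the same pattern with retries and resumability built in.

```typescript
import axios from "axios";

const CHUNK_SIZE = 5 * 1024 * 1024; // 5 MB per request

// Slice one file into CHUNK_SIZE pieces and post them sequentially to a
// hypothetical /upload-chunk endpoint that reassembles them server-side.
async function uploadInChunks(file: File): Promise<void> {
  const totalChunks = Math.ceil(file.size / CHUNK_SIZE);
  for (let index = 0; index < totalChunks; index++) {
    const chunk = file.slice(index * CHUNK_SIZE, (index + 1) * CHUNK_SIZE);
    const form = new FormData();
    form.append("file", chunk, file.name);
    form.append("chunkIndex", String(index));
    form.append("totalChunks", String(totalChunks));
    await axios.post("/upload-chunk", form); // one request per chunk
  }
}
```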
I am developing a component that will provide a GET REST endpoint returning a large (up to 2 MB) JSON array of data. I will be using Redis to cache the JSON array, and the REST endpoint is implemented in a Web API 2 project.
I assumed that I could just return the data in the response stream so that I don't have to hold very large strings in memory, but when I looked at StackExchange.Redis I couldn't find any methods that return a Stream.
It appears that https://github.com/ctstone/csredis does, but that project no longer looks actively maintained.
Am I missing something, or is there a workaround for this?
We have a "big data" service that receives requests in JSON format and returns results in JSON as well. Requests and especially responses sometime can be quite huge - up to 1GB in size ... We are logging all requests and responses and now I'm building a simple user web interface to search and show all requests we've processed.
My problem is that the JSON can be up to 40 levels deep and contain a lot of arrays. How can I give the user the ability to drill down and explore the content?
For what it's worth, the users have the latest stable version of Chrome and 64 GB of RAM.
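For illustration, one way to keep the browser responsive with a document that deep is to render lazily: build DOM only for the nodes the user actually expands. A rough TypeScript sketch, assuming the JSON has already been parsed (for payloads approaching 1 GB you would also want to load it in slices rather than all at once):

```typescript
// Lazily render a parsed JSON value: objects/arrays become collapsible
// <details> elements whose children are only built on first expand.
function renderNode(key: string, value: unknown): HTMLElement {
  if (value !== null && typeof value === "object") {
    const details = document.createElement("details");
    const summary = document.createElement("summary");
    summary.textContent = Array.isArray(value)
      ? `${key} [${value.length}]`
      : `${key} {…}`;
    details.appendChild(summary);

    let built = false;
    details.addEventListener("toggle", () => {
      if (!details.open || built) return;
      built = true; // build children only the first time the node is opened
      for (const [k, v] of Object.entries(value)) {
        details.appendChild(renderNode(k, v));
      }
    });
    return details;
  }
  const leaf = document.createElement("div");
  leaf.textContent = `${key}: ${JSON.stringify(value)}`;
  return leaf;
}

// Usage: document.body.appendChild(renderNode("root", parsedJson));
```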
I was reading Google's performance documentation about HTTP caching at https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/http-caching. It says we should use ETags whenever possible. I am using ASP.NET Web API 2.2 and am now trying to implement ETags in all of my public APIs. I am thinking of implementing them with MD5; that is, I would hash the JSON response on each request using MD5. Is there a performance hit from computing an MD5 hash on every request? My JSON responses are not too big (in the range of 1 to 20 KB).
Yes, there will be a performance hit: calculating a hash takes time. However, you may find that the cost of calculating the hash is insignificant compared with the time saved by not re-sending unchanged bytes over the wire to the client.
That said, there is no guarantee that ETags will give you a performance improvement; it depends on many things. Are you going to regenerate the ETag on the server on every request and compare it with the one in the incoming request? Or are you going to keep a cache of ETag values and invalidate them when the resource changes?
If you regenerate the ETag on every request, it is possible that the time spent pulling data from the database and formatting the representation will be significantly higher than the time it takes to send a few bytes over the wire, especially if the entire representation fits in a single network packet.
The key question is whether you really need the performance gain of ETags and whether it is worth the implementation cost. Setting cache-control headers to enable client-side private caching may give you all the benefit you need without implementing ETags at all.
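For illustration, the conditional-request flow being weighed here looks roughly like this. The sketch uses Node's built-in http and crypto modules rather than Web API, since the mechanics are framework-agnostic: hash the serialized response, compare it with the incoming If-None-Match header, and return 304 with no body when nothing has changed.

```typescript
import { createHash } from "node:crypto";
import { createServer } from "node:http";

createServer((req, res) => {
  // Build (or fetch) the representation, then derive its ETag from an MD5 hash.
  const body = JSON.stringify({ message: "hello", path: req.url });
  const etag = `"${createHash("md5").update(body).digest("hex")}"`;

  if (req.headers["if-none-match"] === etag) {
    res.writeHead(304, { ETag: etag }); // client copy is still valid: no body sent
    res.end();
    return;
  }
  res.writeHead(200, { "Content-Type": "application/json", ETag: etag });
  res.end(body);
}).listen(3000);
```

Note that this variant still regenerates the representation on every request, so, as discussed above, the saving is only in bytes on the wire, not in server work.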
I have a number of posts that go into this subject in more detail:
http://www.bizcoder.com/using-etags-and-last-modified-headers-to-improve-performance-with-http-conditional-requests
http://bizcoder.com/implementing-conditional-request-handling-for-your-api
I have a web page which, upon loading, needs to do a lot of JSON fetches from the server to populate various things dynamically. In particular, it updates parts of a large-ish data structure from which I derive a graphical representation of the data.
It works great in Chrome; however, Safari and Firefox appear to suffer somewhat. While the numerous JSON requests are being processed, those browsers become sluggish and unusable. I am assuming this is due to the rather expensive iteration over said data structure. Is that a valid assumption?
How can I mitigate this without changing the query language so that it's a single fetch?
I was thinking of applying a queue that could limit the number of concurrent Ajax queries (and hence also limit the number of concurrent updates to the data structure)... Any thoughts? Useful pointers? Other suggestions?
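For illustration, a minimal sketch of such a queue in TypeScript, using fetch (the same idea applies to jQuery's Ajax methods); the URLs are placeholders:

```typescript
// Tiny concurrency limiter: at most `maxConcurrent` tasks run at once,
// the rest wait in a FIFO queue.
function createLimiter(maxConcurrent: number) {
  let active = 0;
  const waiting: Array<() => void> = [];

  const next = () => {
    if (active >= maxConcurrent || waiting.length === 0) return;
    active++;
    waiting.shift()!();
  };

  return function run<T>(task: () => Promise<T>): Promise<T> {
    return new Promise<T>((resolve, reject) => {
      waiting.push(() => {
        task()
          .then(resolve, reject)
          .finally(() => { active--; next(); });
      });
      next();
    });
  };
}

// Usage: at most 4 JSON requests (and hence structure updates) in flight at once.
const limited = createLimiter(4);
const urls = ["/api/part1", "/api/part2", "/api/part3"]; // placeholder URLs
Promise.all(urls.map(u => limited(() => fetch(u).then(r => r.json()))))
  .then(parts => console.log(`${parts.length} parts loaded`));
```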
In browser-side JS, create a wrapper around jQuery.post() (or whichever method you are using) that appends the requests to a queue.
Also create a function 'queue_send' that actually calls jQuery.post(), passing the entire queue structure.
On the server, create a proxy function called 'queue_receive' that replays the JSON to your server interfaces as though it came from the browser, collects the results into a single response, and sends that back to the browser.
Browser-side, queue_send_success() (the success handler for queue_send) must decode this response and populate your data structure.
With this, you should be able to reduce your initialization traffic to one actual request, and maybe consolidate some other requests on your website as well.
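A rough browser-side sketch of this approach in TypeScript; the /queue_receive endpoint is the server proxy described above and is assumed to return one result per queued call, in order:

```typescript
interface QueuedRequest {
  url: string;
  body: unknown;
  resolve: (value: unknown) => void;
}

const queue: QueuedRequest[] = [];

// Wrapper used in place of calling jQuery.post()/fetch() directly:
// it only records the call and hands back a promise for its result.
function queuePost(url: string, body: unknown): Promise<unknown> {
  return new Promise(resolve => queue.push({ url, body, resolve }));
}

// queue_send: one real request carrying every queued call.
async function queueSend(): Promise<void> {
  const batch = queue.splice(0, queue.length);
  const res = await fetch("/queue_receive", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(batch.map(({ url, body }) => ({ url, body }))),
  });
  // queue_send_success: fan the combined response back out to the callers.
  const results: unknown[] = await res.json();
  batch.forEach((item, i) => item.resolve(results[i]));
}

// Usage: queuePost("/api/a", {}); queuePost("/api/b", {}); then queueSend();
```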
"In particular, it updates parts of a largish data structure from which I derive a graphical representation of the data."
I'd try:
- Queuing the responses as they come in, then updating the structure once
- Keeping the representation hidden until all the responses are in (a short sketch follows below)
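A short sketch of both suggestions together, assuming an applyUpdate function that merges one JSON response into the existing structure:

```typescript
// applyUpdate is whatever merges one JSON response into the existing structure.
declare function applyUpdate(part: unknown): void;

// Keep the representation hidden while the responses are outstanding,
// then apply them to the structure in a single pass and reveal it.
async function loadAll(urls: string[], container: HTMLElement): Promise<void> {
  container.hidden = true;
  const parts = await Promise.all(urls.map(u => fetch(u).then(r => r.json())));
  parts.forEach(part => applyUpdate(part));
  container.hidden = false;
}
```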
Magicianeer's answer is also good, though I'm not sure it fits your requirement of "without changing the query language so that it's a single fetch"; it would, however, avoid re-engineering existing logic.