I have multiple downloads running in a SWF, each fetching external data.
Can I limit the download speed of one download, or prioritize one over the others?
Flash will download them in the order you ask for them. If you need one first, then ask for that one first.
When running within a browser it is actually the browser that performs the download on behalf of the client container (the client merely asks the browser to do the download). A lot of browsers restrict concurrent downloads anyway, especially from the same domain, and that is not something you will be able to circumvent with the built-in classes.
My guess is that you could roll your own (or use an existing client as a base) using Socket instead of URLLoader. This would be a large amount of work, though - the browser does a lot of work for you for free (SSL, cookies from the current session, gzip, keep-alives, etc.).
I was wondering whether it is possible to host a websocket connection within the boundaries of a serviceworker.js in order to receive notifications while my PWA is closed.
Given the documentation, the regular Push API is the proposed, go-to solution here, but I'm interested in whether this is also possible via WebSockets, because my application would be much nicer to work with thanks to existing libraries for the programming language I use.
Also, websockets would give me an easy way of knowing whether or when my users are online / offline.
So, is this possible in a serviceworker.js, or would it lose the connection under certain circumstances?
Short answer, no.
A service worker is, roughly speaking, a thread belonging to a web page. A web page (usually) lives in a tab of a browser (a PWA is also managed by a browser engine). Nowadays browsers are geared towards performance and a low memory footprint (think mobile), so the browser will kill (or put to sleep) your page as soon as it thinks it can, on the assumption that you're not using a page you're not viewing. Yes, a WebSocket will give you some priority over normal pages, but not that much.
Then there is the OS running the browser. Its main mission is to manage resources, like CPU cycles and RAM, and yes, it will also try to kill your process sooner rather than later.
I have a hosted web server with HTTP/2 (medium speed), and additionally I have space on a fast CDN server that only supports HTTP/1.1.
Is it recommended to load some resources from the CDN, or should I use only the web server because of HTTP/2?
Could loading too many resources from the CDN become a bottleneck due to HTTP/1.1?
Some hints would be appreciated...
You need to test. It really depends on your app, your users and your servers.
Under HTTP/1.1 you are limited to 6 connections per domain, so hosting content on a separate domain (e.g. static.example.com) or loading from a CDN was a way to get beyond that limit. These separate domains are also often cookie-less, which is good for performance and security. And finally, if loading jQuery from code.jquery.com, the user might already have downloaded it for another site, so you save that download completely (though with the number of versions of libraries and CDNs in use, the chance of a commonly used library already being in the browser cache is questionable in my opinion).
However, a separate domain requires setting up a separate connection. That means a DNS lookup, a TCP connection and usually an HTTPS handshake too. This all takes time, and especially if you are downloading just one asset (e.g. jQuery) it can often eat up any benefit from having the assets hosted on a separate site! This is in fact why browsers limit connections to 6 - there was a diminishing rate of return in increasing it beyond that. I've questioned the value of sharded domains for a while because of this, and people shouldn't just assume that they will be faster.
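If you want to get a feel for how much that extra connection costs, a rough Python sketch like this (the hostname is just an example - point it at whatever sharded/CDN domain you are considering) times the DNS lookup, TCP connect and TLS handshake separately:

    import socket
    import ssl
    import time

    # Example host only - substitute the sharded/CDN domain you are considering.
    host = "code.jquery.com"
    port = 443

    t0 = time.monotonic()
    # Resolve first so DNS time can be measured on its own.
    ip = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)[0][4][0]
    t1 = time.monotonic()

    sock = socket.create_connection((ip, port), timeout=10)
    t2 = time.monotonic()

    ctx = ssl.create_default_context()
    tls = ctx.wrap_socket(sock, server_hostname=host)  # performs the TLS handshake
    t3 = time.monotonic()
    tls.close()

    print(f"DNS lookup:    {(t1 - t0) * 1000:.1f} ms")
    print(f"TCP connect:   {(t2 - t1) * 1000:.1f} ms")
    print(f"TLS handshake: {(t3 - t2) * 1000:.1f} ms")

Run it from where your users actually are - for a single small asset those three numbers can easily outweigh the transfer itself.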
HTTP/2 aims to remove the need for separate (aka sharded) domains by multiplexing requests over a single connection, effectively removing the limit of 6 "connections" without the downsides of opening separate ones. It also compresses HTTP headers, reducing the performance cost of sending large cookies back and forth.
So in that sense I would recommend just serving everything from your local server. Not everyone will be on HTTP/2 of course, but support is incredibly strong, so most users will be.
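If you want to check which protocol your servers actually negotiate, a quick sketch using the httpx library will tell you (assuming it is installed with its HTTP/2 extra, i.e. pip install "httpx[http2]"; the URLs below are placeholders for your own origin and CDN hosts):

    # Requires: pip install "httpx[http2]"
    import httpx

    # Placeholder URLs - substitute your own origin and CDN hosts.
    urls = [
        "https://www.example.com/",
        "https://cdn.example.com/style.css",
    ]

    with httpx.Client(http2=True) as client:
        for url in urls:
            response = client.get(url)
            # http_version is "HTTP/2" if the server negotiated it, else "HTTP/1.1"
            print(url, response.http_version)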
However, the other benefit of a CDN is that it is usually globally distributed, so a user on the other side of the world can connect to a nearby CDN server rather than come all the way back to your server. This helps with connection time (the TCP and HTTPS handshakes travel shorter distances) and content can also be cached there. Though if the CDN has to refer back to the origin server for a lot of content there is still a lag (the benefits for the TCP and HTTPS setup remain, however).
So in that sense I would advise using a CDN. However, I would say put all the content through the CDN rather than just some of it as you are suggesting - though you are right that HTTP/1.1 could limit its usefulness. That's odd though, as most commercial CDNs support HTTP/2, and you also say you have a "CDN server" (rather than a network of servers - plural), so maybe you mean a static domain rather than a true CDN?
Either way it all comes down to testing because, as stated at the beginning of this answer, it really depends on your app, your users and your servers, and there is no one true, definite answer here.
Hopefully that gives you some idea of the things to consider. If you want to know more, because Stack Overflow really isn't the place for some of this and this answer is already long enough, then I've just written a book which spends large parts discussing all this: https://www.manning.com/books/http2-in-action
I have a network connection where I pay per megabyte, so I'm interested in reducing my bandwidth usage as far as possible while still having a reasonably good browsing experience. I use this wonderful extension (https://bandwidth-hero.com/). This extension runs an image-compression proxy on my Heroku account that accepts image URLs and returns low-quality versions of those images. This reduces bandwidth usage by 30-40% when images are loaded.
To further reduce usage, I typically browse with both JavaScript and images disabled (there are various extensions for doing this in firefox/firefox-esr/google-chrome). This has the added bonus of blocking most ads (since they usually need JavaScript to run).
For daily browsing, the most efficient solution is using a text-mode browser in a virtual console such as elinks/lynx/links2 running over ssh (with zlib compression) on a VPS server. But sometimes using JavaScript becomes necessary, as some sites will not render without it. Elinks is the only text-mode browser that even tries to support JavaScript, and even that support is quite rudimentary. When I have to come back to using firefox/chrome, I find my bandwidth usage shooting up. I would like to avoid this.
I find that bandwidth is used partially to get the 'raw' html files of the sites I'm browsing, but more often for the associated .js/.css files. These are typically highly compressible. On my local workstation, html+css+javascript files typically compress by a factor of more than 10x when using lzma(2) compression.
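As a rough illustration of what I mean, this little Python script (pass it whatever saved .html/.css/.js files you have lying around) reports the lzma ratio per file:

    import lzma
    import sys
    from pathlib import Path

    # Pass saved .html/.css/.js files on the command line to see how well they compress.
    for name in sys.argv[1:]:
        raw = Path(name).read_bytes()
        packed = lzma.compress(raw, preset=9)
        print(f"{name}: {len(raw)} -> {len(packed)} bytes "
              f"({len(raw) / len(packed):.1f}x)")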
It seems to me that one way to drastically reduce bandwidth consumption would be to use the same approach as the bandwidth-hero extension, i.e. run a compression proxy either on a VPS or on my Heroku account, but do so for text content (.html/.js/.css).
Ideally, I would like to run a compression proxy on my local machine. When I open a site (say www.stackoverflow.com), the browser should send a request to this local proxy. This local proxy then sends a request to a back-end running on heroku/vps. The heroku/vps back-end actually fetches all the content, and compresses it (lzma/bzip/gzip). The compressed content is sent back to my local proxy. The local proxy decompresses the content and finally gives it to the browser.
There is something like this mentioned in this answer (https://stackoverflow.com/a/42505732/10690958) for node.js. I am thinking of the same for Python.
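To make the idea concrete, here is a very rough Python sketch of the two halves (hostnames and ports are placeholders, and it ignores HTTPS/CONNECT tunnelling, cookies, response headers such as Content-Type, and error handling entirely):

    # Very rough sketch - plain HTTP GETs only; no CONNECT/HTTPS tunnelling,
    # no cookies, no header forwarding, no error handling.
    import lzma
    import urllib.parse
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    REMOTE = "http://my-compressor.example.com:8080"  # placeholder: the VPS/Heroku end

    # ---- remote end (runs on the VPS/Heroku dyno) -------------------------
    class RemoteHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # The target URL arrives as ?url=<percent-encoded original URL>
            qs = urllib.parse.urlparse(self.path).query
            target = urllib.parse.parse_qs(qs)["url"][0]
            body = urllib.request.urlopen(target).read()
            packed = lzma.compress(body, preset=9)
            self.send_response(200)
            self.send_header("Content-Length", str(len(packed)))
            self.end_headers()
            self.wfile.write(packed)

    # ---- local end (configured as the browser's HTTP proxy) ---------------
    class LocalHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # A browser talking to a proxy puts the absolute URL in the request line.
            fetch = REMOTE + "/?url=" + urllib.parse.quote(self.path, safe="")
            packed = urllib.request.urlopen(fetch).read()
            body = lzma.decompress(packed)
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # Run LocalHandler on the workstation; on the VPS run RemoteHandler instead,
        # e.g. HTTPServer(("0.0.0.0", 8080), RemoteHandler).serve_forever()
        HTTPServer(("127.0.0.1", 8888), LocalHandler).serve_forever()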
From what Google searches show, HTTP can "automatically" ask for gzip versions of pages. But does this also apply to the associated files that are loaded by JavaScript, and to the CSS files? Perhaps what I am thinking about is already implemented by default?
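As a quick check of what already happens by default, something like this (the URLs are just examples) shows which responses a server gzips - the requests library sends Accept-Encoding: gzip, deflate on its own and transparently decompresses the body:

    import requests

    # Example URLs only - substitute pages and assets you actually load.
    for url in [
        "https://www.example.com/",
        "https://www.example.com/static/app.css",
    ]:
        r = requests.get(url)
        print(url,
              "| Content-Encoding:", r.headers.get("Content-Encoding", "none"),
              "| Content-Length on the wire:", r.headers.get("Content-Length", "?"))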
Any pointers would be welcome. I was thinking of writing the local proxy in Python, as I am reasonably fluent in it, but I know little about Heroku or the intricacies of HTTP.
Thanks.
Update: I found a possible solution here: https://github.com/barnacs/compy
which does almost exactly what I need (minify + compress with brotli/gzip + transcode jpeg/gif/png). It uses Go instead of Python, but that does not really matter. It also has a Docker image here: https://hub.docker.com/r/andrewgaul/compy/. Since I'm not very familiar with Heroku, I can't figure out how to use it to run the compression proxy service on my account. The Heroku docs also weren't of much help to me. Any pointers would be welcome.
I am making a lot of _all_docs requests to CouchDB's HTTP server. One thing I'm realizing is that the data is not compressed, which results in large responses. Even when using limit and skip, the files can sometimes be 10MB each. That doesn't cause any problems for my app, but it does mean that if a connection to our CouchDB server is slower than our office connection, it will go rather slowly.
Is there any way I can enable HTTP compression? I am not referring to attachments - just the JSON files.
Also, I am using Windows Server - not Linux/Unix.
Thanks!
There is no support in CouchDB directly, but it has been requested (so voice your support there if you want this included).
That being said, you have a number of options. First, you can set up nginx as a reverse proxy and allow it to compress (and possibly cache) responses for you. After a quick search, I also found this plugin that you install in CouchDB directly.
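Once the reverse proxy is in place, a quick check along these lines (host, port and database name are placeholders) will confirm whether _all_docs responses are actually coming back compressed:

    import requests

    # Placeholders - point this at the nginx reverse proxy, not at CouchDB directly.
    url = "http://localhost:8080/mydb/_all_docs"

    r = requests.get(url, params={"include_docs": "true"},
                     headers={"Accept-Encoding": "gzip"})
    print("Content-Encoding:", r.headers.get("Content-Encoding", "none"))
    print("Decompressed JSON size:", len(r.content), "bytes")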
Another thing is that CouchDB does a pretty solid job of allowing clients to cache reliably. You can leverage this to prevent repeatedly downloading the same large resource.
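For example, CouchDB sends an ETag on most reads, so a client can revalidate with If-None-Match instead of re-downloading the whole body. A rough sketch, with host and database name as placeholders:

    import requests

    # Placeholders - adjust host/port/database to your setup.
    url = "http://localhost:5984/mydb/_all_docs"

    first = requests.get(url)
    etag = first.headers.get("ETag")  # assumes the response included an ETag
    print("first fetch:", first.status_code, len(first.content), "bytes, ETag", etag)

    # Revalidate instead of re-downloading: 304 means the cached copy is still good.
    second = requests.get(url, headers={"If-None-Match": etag})
    print("revalidation:", second.status_code)  # 304 Not Modified if nothing changed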
We're currently optimizing our web project, and our lead told us to push for using CDNs for external libraries instead of including them in a compile+compress process and shipping them off a cache-enabled nginx setup.
His assumption is that if a user visits example.com, which uses a CDN'd version of jQuery, the jQuery gets cached then. If the user later visits example2.com, which happens to use the same CDN'd jQuery, the jQuery will be loaded from the cache instead of over the network.
So my question is: Do domains actually share their cache?
I argued that even if the browser does share its cache, the problem is that we are running on the assumption that the previously visited sites use the exact same CDN'd file from the exact same CDN. What are the chances of a user having browsed a site using the same CDN'd file? He said to use the largest CDN to increase the chances.
So the follow-up question would be: If the browser does share cache, is it worth the hassle to optimize based on his assumption?
I have looked up topics about CDNs and I have found nothing about this "shared domain cache" or CDNs being used this way.
Well, your lead is right: this is basic HTTP.
All you are doing is indicating to the client where it can find the file.
The client then handles sending a request to the CDN in compliance with their caching rules.
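You can see those caching rules for yourself by inspecting the response headers the CDN sends back - for example (using the Google-hosted jQuery URL; any CDN asset works the same way):

    import requests

    # Example: the Google-hosted jQuery URL; substitute any CDN asset you use.
    url = "https://ajax.googleapis.com/ajax/libs/jquery/3.6.0/jquery.min.js"

    r = requests.get(url)
    for header in ("Cache-Control", "Expires", "Age", "ETag", "Last-Modified"):
        print(f"{header}: {r.headers.get(header, '-')}")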
But you shouldn't over-use CDNs for libraries either; keep in mind that if you need a specific version of a library, especially an older one, you're unlikely to get many cache hits because of version fragmentation.
For widely used and heavy libraries like jQuery, using the latest version is recommended.
If you can take them all from the same CDN (e.g. Google's), all the better, especially as HTTP/2 is coming.
Additionally, they save you bandwidth, which can amount to a lot when you have high traffic loads, and they can reduce load times for users far from your server (Google's is great for this).