I'm trying to provision a client-side Swing application using OBR and Felix. It works, but even after pruning many unused bundles, the obr.xml file is still about 1 MB.
That file will be downloaded many times, even though it's not very dynamic.
If I could gzip that file, it would compress by a factor of more than 20, leaving less than 50 KB.
Can Nexus do that for me? Could I use something like:
https://nexus.dexels.com/nexus/content/groups/obr/.meta/obr.gz
instead of:
https://nexus.dexels.com/nexus/content/groups/obr/.meta/obr.xml
I can't find anything about this, and it would make a lot of sense, I think.
I'm using nexus-obr-plugin-2.0.1-SNAPSHOT
You can enable gzip compression for responses by adding the following line to ${nexus_root}/conf/nexus.properties:
enable-restlet-encoder=true
If the Felix client sends the "Accept-Encoding: gzip, deflate" header, then the response will be compressed.
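For illustration, the negotiation looks roughly like this (headers trimmed down):

    GET /nexus/content/groups/obr/.meta/obr.xml HTTP/1.1
    Host: nexus.dexels.com
    Accept-Encoding: gzip, deflate

    HTTP/1.1 200 OK
    Content-Type: application/xml
    Content-Encoding: gzip

The client keeps requesting obr.xml; the compression happens transparently at the HTTP layer, so no separate obr.gz URL is needed.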
Related
I have a network connection where I pay per megabyte, so I'm interested in reducing my bandwidth usage as much as possible while still having a reasonably good browsing experience. I use this wonderful extension (https://bandwidth-hero.com/). This extension runs an image-compression proxy on my Heroku account that accepts image URLs and returns low-quality versions of those images. This reduces bandwidth usage by 30-40% when images are loaded.
To further reduce usage, I typically browse with both JavaScript and images disabled (there are various extensions for doing this in firefox/firefox-esr/google-chrome). This has an added bonus of blocking most ads (since they usually need JavaScript to run).
For daily browsing, the most efficient solution is a text-mode browser such as elinks/lynx/links2, running in a virtual console over ssh (with zlib compression) on a VPS. But sometimes JavaScript becomes necessary, as some sites will not render without it. Elinks is the only text-mode browser that even tries to support JavaScript, and even that support is quite rudimentary. When I have to switch back to firefox/chrome, I find my bandwidth usage shooting up. I would like to avoid this.
I find that bandwidth goes partly to fetching the 'raw' HTML of the sites I'm browsing, but more often to the associated .js/.css files. These are typically highly compressible. On my local workstation, html+css+javascript files typically compress by a factor of more than 10 with lzma(2) compression.
It seems to me that one way to drastically reduce bandwidth consumption would be to use the same approach as the bandwidth-hero extension, i.e. run a compression proxy either on a VPS or on my Heroku account, but do so for text content (.html/.js/.css).
Ideally, I would like to run a compression proxy on my local machine. When I open a site (say www.stackoverflow.com), the browser should send a request to this local proxy. This local proxy then sends a request to a back-end running on heroku/vps. The heroku/vps back-end actually fetches all the content, and compresses it (lzma/bzip/gzip). The compressed content is sent back to my local proxy. The local proxy decompresses the content and finally gives it to the browser.
There is something like this mentioned in this answer (https://stackoverflow.com/a/42505732/10690958) for node.js. I am thinking of the same for Python.
From what Google searches show, HTTP clients can "automatically" request gzip-compressed versions of pages via the Accept-Encoding header. But does this also apply to the associated files loaded by JavaScript, and to the CSS files? Perhaps what I am thinking about is already implemented by default?
Any pointers would be welcome. I was thinking of writing the local proxy in Python, as I am reasonably fluent in it, but I know little about Heroku or the intricacies of HTTP.
Thanks.
Update: I found a possible solution here: https://github.com/barnacs/compy,
which does almost exactly what I need (minify + compress with brotli/gzip + transcode jpeg/gif/png). It uses Go instead of Python, but that does not really matter. It also has a Docker image here: https://hub.docker.com/r/andrewgaul/compy/. Since I'm not very familiar with Heroku, I can't figure out how to use it to run the compression proxy service on my account. The Heroku docs also weren't of much help to me. Any pointers would be welcome.
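In case it's useful, here is a rough sketch of pushing a prebuilt Docker image to Heroku's container registry with the standard Heroku CLI. The app name is illustrative, and note that Heroku requires the process to bind to the port given in the $PORT environment variable, so whether compy can be configured to do that is an assumption you'd have to verify in its README:

    docker pull andrewgaul/compy
    heroku login
    heroku container:login
    # create an app to host the proxy (name is just an example)
    heroku create my-compression-proxy
    # retag and push the image as the app's web process
    docker tag andrewgaul/compy registry.heroku.com/my-compression-proxy/web
    docker push registry.heroku.com/my-compression-proxy/web
    # make the pushed image the running release
    heroku container:release web --app my-compression-proxy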
My Clojure application is deployed on Heroku along with a minified 600KB+ ClojureScript JS artifact that I'm serving as a static file at /js/main.js.
How do I enable gzip compression on http-kit to reduce the size of my JS artifact over the wire?
This is not supported by http-kit; usually you'd want to put a proxy like Nginx in front anyway. Since you probably don't have that option on Heroku, you can do the compression via Ring middleware instead; see this issue for pointers.
I am making a lot of _all_docs requests to CouchDB's HTTP server. One thing I'm realizing is that the responses are not compressed, so this results in large transfers. Even when using limit and skip, the responses can sometimes be 10 MB each. That doesn't cause any problems for my app, but it does mean that if a connection to our CouchDB server is slower than our office connection, things will go rather slowly.
Is there any way I can enable HTTP compression? I am not referring to attachments - just the JSON files.
Also, I am using Windows Server - not Linux/Unix.
Thanks!
There is no support for this in CouchDB directly, but it has been requested (so voice your support there if you want it included).
That being said, there are a number of options. First, you can set up nginx as a reverse proxy in front of CouchDB and let it compress (and possibly cache) responses for you. After a quick search, I also found this plugin that you install into CouchDB directly.
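A minimal sketch of the nginx approach, assuming CouchDB is listening on its default port 5984 on the same machine (the port, listen address, and MIME-type list are assumptions to adapt):

    # nginx reverse proxy that gzips CouchDB's JSON responses
    server {
        listen 80;

        location / {
            gzip on;
            # CouchDB replies with application/json or text/plain,
            # depending on the Accept header the client sends
            gzip_types application/json text/plain;
            proxy_pass http://127.0.0.1:5984;
        }
    }

Clients that send Accept-Encoding: gzip then get compressed JSON, while everyone else still gets the plain response.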
Another thing is that CouchDB does a pretty solid job of letting clients cache reliably; it sends ETags, so conditional requests can skip re-downloading unchanged data. You can leverage this to avoid repeatedly downloading the same large resource.
I am working on a web application with dynamic content generated by a servlet running in a JBoss container and static content/load balancing being handled by Apache + mod_jk.
The client end uses jQuery to make AJAX requests to the servlet, which processes them and in turn generates large JSON responses.
One thing I noticed is that the original author chose to manually compress the output stream in the servlet using something like below.
response.setHeader("Content-Encoding", "gzip"); // so clients know to decompress the body
gzout = new GZIPOutputStream(response.getOutputStream());
Could this have been handled using mod_deflate on the Apache end? If you can do this, is it considered better practice, can you explain why or why not?
I believe it makes more sense to apply HTTP compression within Apache in your case.
If the server is properly configured to compress responses of this type (application/json, assuming the server-side code sets the correct content type), then the manually gzipped output just gets wastefully re-compressed a second time anyway.
Also, what happens here if a client that doesn't support gzip makes a request? If you compress at the server level, Apache will automatically respond appropriately based on each request's Accept-Encoding header.
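For reference, a minimal mod_deflate setup might look like the following (which MIME types to list is an assumption based on the JSON responses described above):

    # in httpd.conf or the relevant vhost, with mod_deflate loaded
    AddOutputFilterByType DEFLATE application/json text/html text/css application/javascript

Apache will then skip compression automatically for clients whose Accept-Encoding header doesn't allow it.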
A little additional research shows a couple of good alternatives.
Issue:
There is a network channel between Apache and JBoss. Without compression of any kind on the JBoss end, that hop has the same latency and bandwidth issues you have between Apache and your clients.
Solutions:
You can use mod_deflate on the Apache side: accept uncompressed responses from JBoss and compress them before delivering to your clients. I could see this making sense in certain network topologies (proposed by Dave Ward).
You can apply a Java EE filter. This will intercept responses and compress them before they exit the JBoss container. This has the benefit of compression at the JBoss level without a bunch of nasty GZIP-related code in your business servlet; see the sketch below.
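A minimal sketch of such a filter, written against the older javax.servlet API that Tomcat 5.5 ships (class names are illustrative; Writer-based output, Content-Length handling, and error paths are omitted for brevity):

    import java.io.IOException;
    import java.util.zip.GZIPOutputStream;
    import javax.servlet.*;
    import javax.servlet.http.*;

    public class GzipFilter implements Filter {
        public void init(FilterConfig config) {}
        public void destroy() {}

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            HttpServletResponse response = (HttpServletResponse) res;
            String acceptEncoding = request.getHeader("Accept-Encoding");

            if (acceptEncoding != null && acceptEncoding.indexOf("gzip") >= 0) {
                // Client can handle gzip: wrap the response so servlet output is compressed
                GzipResponseWrapper wrapped = new GzipResponseWrapper(response);
                try {
                    chain.doFilter(req, wrapped);
                } finally {
                    wrapped.finish(); // write the trailing gzip block
                }
            } else {
                chain.doFilter(req, res); // client can't decompress: pass through untouched
            }
        }
    }

    class GzipResponseWrapper extends HttpServletResponseWrapper {
        private GZIPOutputStream gzip;
        private ServletOutputStream stream;

        GzipResponseWrapper(HttpServletResponse response) {
            super(response);
            response.setHeader("Content-Encoding", "gzip");
        }

        public ServletOutputStream getOutputStream() throws IOException {
            if (stream == null) {
                gzip = new GZIPOutputStream(getResponse().getOutputStream());
                stream = new ServletOutputStream() {
                    public void write(int b) throws IOException {
                        gzip.write(b);
                    }
                    public void write(byte[] b, int off, int len) throws IOException {
                        gzip.write(b, off, len);
                    }
                };
            }
            return stream;
        }

        void finish() throws IOException {
            if (gzip != null) {
                gzip.finish();
            }
        }
    }

The filter would then be mapped to the JSON-producing servlets in web.xml, and the manual GZIPOutputStream code could come out of the servlet itself.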
JBoss by default uses Tomcat as its servlet engine. If you navigate to $JBOSS_HOME/deploy/jbossweb-tomcat55.sar, you can edit the server.xml file to set the compression="on" attribute on the HTTP/1.1 connector. This will compress all responses going out of the container.
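The connector entry might then look something like this (the port and MIME-type list are illustrative, and compressionMinSize/compressableMimeType are optional tuning attributes; double-check the exact attribute names against the Tomcat 5.5 docs):

    <!-- $JBOSS_HOME/deploy/jbossweb-tomcat55.sar/server.xml -->
    <Connector port="8080" compression="on"
               compressionMinSize="2048"
               compressableMimeType="text/html,text/css,application/json" />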
The trade-off between options 2 and 3 is compressing piecemeal for particular servlets versus compressing all responses.
I'm wondering what is the best pattern to allow large files to be uploaded to a server using Ruby.
I've found "Rails and Large, Large file Uploads: Looking at the alternative", but it doesn't give any concrete solutions.
I don't want to use Rails, since I'm working on a simple upload server that will run in standalone mode. I'm guessing that Sinatra could be the key, but I don't know which web server I should use to run it without hitting a timeout.
I also need this web server to allow simultaneous uploads.
UPDATE: By "large files" I mean between 200MB and 5GB.
UPDATE2: Since those files are videos (in my case), I can deal with a max size of 2GB like youtube.
OK, I'm taking a bit of a stretch here, but:
If you used CouchDB as a target for your uploads, you would get rid of the timeout problem.
Consider CouchDB as some "temp" storage in this example.
Once an upload finishes, you can take the file from CouchDB and do with it whatever you want.
I managed to upload files as big as 9 GB over a DSL line into CouchDB without any drama.
It may take a bit of reading, but I think you could make it work.
CouchDB has many Rails gems, so it plays nicely with others ;)
Let me know if you want to go down that rabbit hole so I can give you some more pointers.
Passenger recommends using a separate Apache/nginx module to handle uploads.