I have a Puppeteer application running on Heroku. The buildpacks I'm using are:
https://github.com/jontewks/puppeteer-heroku-buildpack
https://github.com/mentimeter/heroku-buildpack-nodejs
This application was originally within Heroku's 500 MB slug size limit (434 MB), but recently grew to 601.1 MB. I assume this is due to a change in the size of Chromium.
I've updated my app so that Chrome is included via a buildpack:
https://buildpack-registry.s3.amazonaws.com/buildpacks/heroku/google-chrome.tgz
and added:
PUPPETEER_EXECUTABLE_PATH="google-chrome"
PUPPETEER_SKIP_DOWNLOAD=true
to the config vars - this reduced the slug size to 476 MB.
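For reference, the rendering code only needs to point Puppeteer at the buildpack's Chrome instead of a bundled Chromium. A minimal sketch (the renderPdf wrapper and the sandbox flags are my own illustration, not something the buildpack requires):

    // Launch Puppeteer against the Chrome provided by the buildpack.
    const puppeteer = require('puppeteer');

    async function renderPdf(html) {
      const browser = await puppeteer.launch({
        // Set on Heroku as PUPPETEER_EXECUTABLE_PATH="google-chrome"
        executablePath: process.env.PUPPETEER_EXECUTABLE_PATH,
        // Commonly needed when running Chrome inside a dyno
        args: ['--no-sandbox', '--disable-setuid-sandbox'],
      });
      try {
        const page = await browser.newPage();
        await page.setContent(html, { waitUntil: 'networkidle0' });
        return await page.pdf({ format: 'A4' });
      } finally {
        await browser.close();
      }
    }

Passing executablePath explicitly keeps the setup independent of whether the installed Puppeteer version picks up the env var on its own.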
I was not satisfied - this is still too close to the limit. So I had a look at Playwright and set it up.
The buildpacks I'm using are:
https://github.com/mxschmitt/heroku-playwright-buildpack.git
https://github.com/mentimeter/heroku-buildpack-nodejs
with
PLAYWRIGHT_BUILDPACK_BROWSERS=chromium
and playwright-chromium included in package.json.
The slug size: 474 MB.
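The rendering code barely changes between the two libraries; a minimal sketch with playwright-chromium (again, the wrapper name and options are illustrative):

    // The same PDF rendering with playwright-chromium.
    const { chromium } = require('playwright-chromium');

    async function renderPdf(html) {
      // Launches the Chromium that the playwright-chromium package installed;
      // the buildpack supplies the OS-level dependencies it needs.
      const browser = await chromium.launch();
      try {
        const page = await browser.newPage();
        await page.setContent(html, { waitUntil: 'networkidle' });
        return await page.pdf({ format: 'A4' });
      } finally {
        await browser.close();
      }
    }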
My assumption is that in the Puppeteer case the size of the dependencies might push the slug beyond 500 MB, while in the Playwright case Chromium itself might be the issue, as it's part of the slug.
My application, written in Node, uses a headless browser to render PDFs. While switching to different technologies is an option, I'd rather not rewrite this setup at this point.
So the question is: how can I prevent my application from exceeding Heroku's 500 MB limit again?
I solved it by downgrading playwright-chromium to version 1.9.1.
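In practice that just means pinning the exact version in package.json, so a rebuild doesn't silently pull a newer release (and, presumably, a larger bundled Chromium):

    "dependencies": {
      "playwright-chromium": "1.9.1"
    }

Note the absence of ^ or ~ in front of the version, so npm won't upgrade it on its own.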
I have recently published a web app to Heroku built using the Plotly Dash library. The web app depends on uploading files using the Dash Core Components (dcc) dcc.Upload and then saving the files using dcc.Store. This works perfectly on localhost; however, when trying to upload files to the live version hosted by Heroku, the app.callback that depends on the uploaded files won't fire.
Are there any issues with using dcc.Upload or dcc.Store on Heroku? I haven't found any related issues on forums. The files are not large (< 1 MB), and as I said, it all works on localhost.
Since it's not on my localhost I'm having trouble troubleshooting. Is there an easy way to troubleshoot a live web app on Heroku?
It is possible that the user your app is running as does not have write permissions to the directory in which you are saving the file after upload.
Try checking the permissions on the directory. If it still doesn't work, please share the errors you are getting; that would make it easier to suggest solutions.
After some digging I found that the abrupt failure of the callbacks was due to a timeout error with code H12 from Heroku. H12 means that any request without a response within 30 seconds is terminated. The solution was therefore to split the callbacks into smaller pieces so that no single function exceeded the 30-second limit.
I currently want to deploy a deep learning REST API using Flask on Heroku. The weights (it's a pre-trained BERT model) are stored here as a .zip file. Is there a way I can deploy these directly?
From what I currently understand, I have to upload these to GitHub/S3. That's a bit of a hassle and seems pointless since they are already hosted. Do let me know!
Generally, you can write a bash script that unzips the content and then executes your program. However...
Time concern: unpacking costs time, and free-tier Heroku workers only run for roughly a day before being forcefully restarted. If you are running a web dyno the restarts will be even more frequent, and if it takes too long to boot, the process fails (it has 60 seconds to bind to $PORT).
Size concern: that zip file is 386 MB, and when unpacked it is likely to be even bigger.
Heroku has a slug size limit of 500 MB; see: https://devcenter.heroku.com/changelog-items/1145
Once the zip file is unpacked you will be over the limit: the zip file itself plus its unpacked content is well over 500 MB. You would need to pre-unpack it and make sure the files stay under 500 MB, but given that the data is already 386 MB zipped, it will only be bigger unpacked. Furthermore, you will rely on some buildpacks (Python, JavaScript, ...), which take up space on top of that. You will go well over 500 MB.
Which means: You will need to pay for Heroku services or look for a different hosting provider.
On my Heroku app, I deploy multiple times a day. With every release, the Heroku "slug" keeps getting bigger even with minor code changes. This is the kind of message I see in the build log:
Warning: Your slug size (446 MB) exceeds our soft limit (300 MB) which may affect boot time.
The prior build was 444 MB, the one before 441, etc.
With every release it gets bigger, until it reaches Heroku's hard limit of 500 MB and then I need to clear the build cache manually.
Why is the build cache getting bigger for minor code changes? How can I prevent it from reaching the hard limit of 500 MB, which breaks my automated deployments?
Have you tried downloading the slugs for two builds and comparing the contents? You could use the slugs CLI plugin to download them and see what extra files are clogging things up: https://github.com/heroku/heroku-slugs
I have an Angular 2 project created from a seed, and I've added several Angular components to my initial page. Some of them load images, which is relatively slow, but the actual problem is this:
I have a big bundle (~1 MB) and I am currently working on making it smaller, following a guide on the subject.
The initial load makes just a few requests, loads the bundle (usually ~3 seconds) and waits for the Angular application to bootstrap (~1.4 seconds). After that it starts loading all the components and their resources (fonts, images, etc.).
I am afraid that even after I reduce the bundle size, the application will still spend ~1.5 seconds bootstrapping without making any requests, and then wait another ~1 second for the components' resources to load. I assume that will lead to 3+ seconds of load time. With my app being relatively simple, I find this unacceptable.
Q1: Is there a way to load the component resources earlier and just reference them on component load?
Q2: Is there a way to make the application bootstrap faster?
I'm open to other suggestions too :)
Edit:
I am using AoT compilation (provided with the seed) and I have taken steps to lower the size of the app.js file. The problem remains the gap between the end of the app.js download and the first component's resource calls.
UPDATE (2016-12-19):
What I did (for now) is only on the server side:
Enabled HTTP/2 support, which resulted in a major speed improvement.
Enabled gzip compression, which reduced the size of the JS by more than 5 times.
Those server configuration changes were trivial on NGINX and Apache, so it's worth giving them a try.
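For NGINX, the two changes boil down to a handful of directives; a rough sketch (server name, paths, and gzip types are placeholders, adjust to your setup):

    server {
        listen 443 ssl http2;              # serve over HTTP/2 (requires TLS)
        server_name example.com;

        gzip on;                           # compress responses on the fly
        gzip_types application/javascript text/css application/json;
        gzip_min_length 1024;              # don't bother with tiny responses

        root /var/www/app/dist;            # the built Angular bundle
    }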
Now, although the site loads a lot faster, those changes don't solve the initial problems (Q1 and Q2 above). Hence I am looking at some other approaches, which you may also want to follow if you are in my spot:
Prerendering with Gulp
Prerendering in Amazon
AoT vs JIT compilation - some insights.
UPDATE (2018-06-11):
Here are some materials that helped me boost the initial load:
Angular Performance Workshop - Manfred Steyer
Angular — Faster performance and better user experience with Lazy Loading - by Ashwin Sureshkumar
In my case, lazy loading plays the biggest role.
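A minimal sketch of what route-level lazy loading looks like in the router config (route and module names are illustrative, not from my app):

    // app-routing.module.ts - the ReportsModule is compiled into its own
    // chunk and only downloaded when the user navigates to /reports.
    import { NgModule } from '@angular/core';
    import { RouterModule, Routes } from '@angular/router';

    const routes: Routes = [
      {
        path: 'reports',
        // Angular 2-7 string syntax; on Angular 8+ use:
        // loadChildren: () => import('./reports/reports.module').then(m => m.ReportsModule)
        loadChildren: './reports/reports.module#ReportsModule'
      }
    ];

    @NgModule({
      imports: [RouterModule.forRoot(routes)],
      exports: [RouterModule]
    })
    export class AppRoutingModule {}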
Q2: You can make the application bootstrap faster and decrease the bundle size by implementing Ahead-of-Time (AoT) compilation: https://angular.io/docs/ts/latest/cookbook/aot-compiler.html
Just like it sounds, the templates are precompiled and all the scripts are generated beforehand, which reduces processing time after the initial load. Additionally, the Angular 2 compiler is not included in your bundle, which should cut down the bundle size itself by a large amount.
Q1: There is lazy-loading support for components, but I haven't looked into what it entails; someone else may be able to answer that part for you.
Use Cloudflare DNS. We were able to reduce load time from 50 seconds to 4 seconds, and the refresh time to around 1 second.
There is a free version, and it works fine.
Also, in addition to minifying/bundling the JS, enabling gzip compression on the server side will decrease the load time.
You have to reduce your main.js bundle as much as possible; the more there is in main.js, the longer it takes to process. Check your imports in app.module and use lazy loading.
We're looking for advice concerning the usage of URL seeds. We use libtorrent to distribute our application's builds to customers. For that purpose we use a single torrent tracker and several web servers, all distributing the same file.
On the client side there is a C# application that uses a native DLL with libtorrent. Right after the torrent is added, all URL seeds are added to it using torrent_handle::add_url_seed. The torrent is auto-managed.
The problem is the speed. Although this setup shows good speed in our test environments from time to time, our production clients download from our 8 URL seeds at speeds close to zero (50 kb/s max). When we try to download from the same URLs with a browser we get server-limited speed (1 Mb/s and more). Attempting to download with a script that simulates libtorrent's request gives the same high speed. The only notable difference between the setups is the seed/peer count: the production setup has lots of them (> 50), while the test one has only the main seed and one URL seed.
What could be the reason for such behaviour? Is there any libtorrent option that can affect this?
Got an answer from Arvid Norberg here: http://permalink.gmane.org/gmane.network.bit-torrent.libtorrent/4631