Why is the Heroku build cache getting bigger with every release?

On my Heroku app, I deploy multiple times a day. With every release, the Heroku "slug" keeps getting bigger even with minor code changes. This is the kind of message I see in the build log:
Warning: Your slug size (446 MB) exceeds our soft limit (300 MB) which may affect boot time.
The prior build was 444 MB, the one before that 441 MB, and so on.
With every release it gets bigger, until it reaches Heroku's hard limit of 500 MB and then I need to clear the build cache manually.
Why is the build cache getting bigger for minor code changes? How can I prevent it from reaching the hard limit of 500 MB, which breaks my automated deployments?

Have you tried downloading the slugs for two builds and comparing the contents? You could use the slugs CLI plugin to download them and see what extra files are clogging things up: https://github.com/heroku/heroku-slugs
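Once you have extracted two consecutive slugs (say into ./old and ./new), a short script can rank which files appeared or grew between them. A rough sketch; the two directory names are assumptions:

```python
# Walk both extracted slug trees and print the files whose size changed,
# biggest growth first, to spot what is accumulating between builds.
import os

def sizes(root):
    out = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            out[os.path.relpath(path, root)] = os.path.getsize(path)
    return out

old, new = sizes("old"), sizes("new")  # assumed extraction directories
changed = [(size - old.get(path, 0), path) for path, size in new.items()
           if size != old.get(path, 0)]
for delta, path in sorted(changed, reverse=True)[:40]:
    print("%+9.2f MB  %s" % (delta / 1024.0 / 1024.0, path))
```

Growing dependency caches or compiled assets that the buildpack appends to rather than replaces are common culprits.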

Related

Deploying a hosted deep learning model on Heroku?

I currently want to deploy a deep learning REST API using Flask on Heroku. The weights (it's a pre-trained BERT model) are stored here as a .zip file. Is there a way I can deploy these directly?
From what I currently understand, I would have to upload these to GitHub/S3. That's a bit of a hassle and seems pointless since they are already hosted. Do let me know!
Generally you can write a bash script that unzips the content and then executes your program. However...
Time Concern: Unpacking costs time, and free-tier Heroku workers only run for roughly a day before being forcefully restarted. If you are operating a web dyno the restarts will be even more frequent, and if it takes too long to boot, the process fails (60 seconds to bind to $PORT).
Size Concern: That zip file is 386 MB, and when unpacked it is likely to be even bigger.
Heroku has a slug size limit of 500 MB; see: https://devcenter.heroku.com/changelog-items/1145
Once the zip file is unpacked you will be over the limit: the zip file itself plus its unpacked content is well over 500 MB. You would need to pre-unpack it and make sure the files come to less than 500 MB, but given that the data is already 386 MB zipped, it will be bigger unpacked. Furthermore, you will rely on buildpacks (Python, JavaScript, ...) that add their own footprint, and processing the archive takes memory. You will go well over 500 MB.
Which means: you will need to pay for Heroku services or look for a different hosting provider.
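One caveat: files downloaded at runtime do not count toward the slug size, so a workaround is to fetch and unpack the weights onto the dyno's ephemeral disk when the process boots, accepting the time concern above (it re-runs on every restart, and a web dyno still has to bind to $PORT within 60 seconds, so do it lazily or in a background thread). A minimal sketch; the URL and paths are placeholders:

```python
# Fetch the zipped weights into ephemeral storage and unpack them once per
# dyno boot. Nothing here survives a restart.
import os
import zipfile

import requests

WEIGHTS_URL = "https://example.com/bert-weights.zip"  # placeholder URL
ARCHIVE = "/tmp/weights.zip"
TARGET_DIR = "/tmp/model"

def ensure_weights():
    if os.path.isdir(TARGET_DIR):  # already unpacked this boot
        return TARGET_DIR
    with requests.get(WEIGHTS_URL, stream=True, timeout=300) as resp:
        resp.raise_for_status()
        with open(ARCHIVE, "wb") as f:
            for chunk in resp.iter_content(chunk_size=1 << 20):  # 1 MB chunks
                f.write(chunk)
    with zipfile.ZipFile(ARCHIVE) as z:
        z.extractall(TARGET_DIR)
    os.remove(ARCHIVE)  # free the ~386 MB the archive itself occupies
    return TARGET_DIR
```

This sidesteps the slug limit but not the memory limit: the model still has to fit in the dyno's RAM once loaded.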

Is max storage capacity of Heroku free apps more than 3GB?

My Heroku app appears to have more than 3 GB of storage capacity, and I want to know whether that is true.
Based on the Heroku site it should be limited to 500 MB, as you can see here:
Heroku has certain soft and hard limits in using its service. Hard limits are automatically enforced by the Service. Soft limits are consumable resources that you agree not to exceed.
Network Bandwidth: 2TB/month - Soft
Shared DB processing: Max 200msec per second CPU time - Soft
Dyno RAM usage: Determined by Dyno type - Hard
Slug Size: 500MB - Hard
Request Length: 30 seconds - Hard
Excuse me, I googled "free dyno max storage size" but I only get sites like this, which have no information about the max capacity of Heroku free apps!
I must add that someone else uploaded the file my.sassy.girl.s1.web.48-pahe.in to this site, and I don't know whether its size is really 3 GB (when I try to download it, Firefox shows its size as 3 GB). Any idea how to find out whether that file really is 3 GB?
Thanks.
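(As an aside, one quick way to check the size a server advertises without downloading the whole file is an HTTP HEAD request. A sketch with a placeholder URL; note that not all servers answer HEAD requests or report Content-Length honestly:)

```python
# Ask for headers only; Content-Length is the size the server claims.
import requests

resp = requests.head("https://example.com/file-to-check",
                     allow_redirects=True, timeout=30)
size = int(resp.headers.get("Content-Length", 0))
print("%d bytes (~%.2f GB)" % (size, size / 1024.0 ** 3))
```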
The 500 MB limit is that of the slug - the code and other assets in your Git repository that you're deploying.
Heroku dynos also have temporary storage you can use, but it's important to note that any files placed on a dyno disappear after any dyno reboot. That means that every 24 hours (as well as after any deployment) all files not in your Git repository will go away.
https://devcenter.heroku.com/articles/dynos#ephemeral-filesystem
User-uploaded files should go to static storage like Amazon S3.
https://devcenter.heroku.com/articles/s3
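A minimal sketch of what that upload might look like with boto3; the bucket name, key, and local path are placeholders, and AWS credentials are assumed to come from the environment:

```python
# Push an uploaded file straight to S3 instead of keeping it on the dyno.
import boto3

s3 = boto3.client("s3")  # reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from env
s3.upload_file("/tmp/upload.pdf", "my-app-uploads", "uploads/upload.pdf")  # placeholder names

# Hand the client a time-limited download link rather than serving the file yourself.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-app-uploads", "Key": "uploads/upload.pdf"},
    ExpiresIn=3600,  # one hour
)
print(url)
```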

Magento Admin suddenly slowed down

We have Magento EE 1.14. The admin was working fine until two days ago, when its speed dropped dramatically. The frontend is not affected, and there have been no changes in code or server configuration. Here are my attempts to fix the problem, none of which worked:
Log cleaning is properly configured
Removed two unused extensions, but no improvement
Tried disabling non-critical extensions to see if speed would improve, but no luck
I can NOT use the Redis cache at this time, but I have configured a new server which uses Redis and will move to it next month
Sometimes the backend will gain speed for a few minutes
I enabled the profiler; the source of the delay is Mage (screenshot attached).
Here are my questions:
Is there any way to know the exact reason for the Mage delay?
Are there other tests I can use to identify the cause of the delay?
Thanks in advance,
It could be a delay on external resource connections. Do you have New Relic or similar software? Check there for slow connections. If you don't have NR, profile the admin with blackfire.io. The Magento profiler is really unhelpful :)
Follow below steps:
Delete unused extensions
It is best to remove unused extensions rather than just disabling them. If you disable an extension, it will still exist in the database, which not only increases the size of your database (DB) but also adds to the DB read time. So keep your approach clear: if you don't need it, DELETE it.
Keep your store clean by deleting unused and outdated products
One should keep in mind that a clean store is a fast store. We can make the front end faster by caching and displaying only a limited set of products even if there are more than 10,000 items in the back end, but we cannot escape their wrath. If the number of products keeps increasing in the backend, it may get slower, so it is best to remove unused products. Repeat this activity every few months to keep the store fast.
Reindexing
One of the basic reasons why website administrators experience slow performance while saving a product is reindexing. Whenever you save a product, the Magento backend starts to reindex, and since you have a lot of products, it takes some time to complete. This causes unnecessary delays.
Clear the Cache
Caching is very important for any web application, so that the web server does not have to process the same request again and again.

Remote API server slowness

From our server we reach api.twitter.com and use Twitter's REST API. Until 3 days ago we had no problems, but since then we have a slowness problem.
According to the Twitter API status page there is no problem, but we see very big delays.
We make 350-400 requests per minute.
Before, we had a performance of 600-700 ms per request.
But now it has become 3600-4000 ms per request.
It doesn't look like a temporary slowness, because it has persisted for nearly 3 days.
What did I check:
- I didn't make any big code change in our repo. Also, when we make minimal requests with just one line of code, we still see this slowness.
- I checked server speed with Ookla's Speedtest. It looks good: 800 Mb/s download, 250 Mb/s upload.
- We don't have any CPU, RAM or disk problems. CPU average is 30%, RAM is 50% loaded, disk IO is nearly 4-5%.
So what would be the probable causes?
I can check them and update the question.
(Centos 6.5, PHP 5.4.36, Nginx 1.6, Apache/2.2.15, Apache run PHP as PHP module, XCache 3.2.0)
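Since the bandwidth, CPU, and the Twitter status page all look fine, one way to narrow this down is to time each phase of a request separately: if the extra ~3 seconds shows up in DNS, the resolver is suspect; if it shows up only in the HTTP phase, the problem is on the route to the API or on the remote side. A rough sketch (the hostname comes from the question; everything else is illustrative):

```python
# Time DNS, TCP connect, TLS handshake, and the HTTP round-trip separately
# to see which phase accounts for the extra latency.
import socket
import ssl
import time

HOST, PORT = "api.twitter.com", 443

t0 = time.time()
ip = socket.getaddrinfo(HOST, PORT, proto=socket.IPPROTO_TCP)[0][4][0]
t_dns = time.time() - t0

t0 = time.time()
sock = socket.create_connection((ip, PORT), timeout=10)
t_tcp = time.time() - t0

t0 = time.time()
tls = ssl.create_default_context().wrap_socket(sock, server_hostname=HOST)
t_tls = time.time() - t0

t0 = time.time()
tls.sendall(b"HEAD / HTTP/1.1\r\nHost: api.twitter.com\r\nConnection: close\r\n\r\n")
tls.recv(1024)  # the first bytes of the response are enough for timing
t_http = time.time() - t0
tls.close()

print("DNS %.3fs  TCP %.3fs  TLS %.3fs  HTTP %.3fs" % (t_dns, t_tcp, t_tls, t_http))
```

If DNS dominates, check /etc/resolv.conf; if the HTTP phase dominates, comparing traceroutes from your server and from another network would be the next step.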

How much disk space do heroku plans have?

I am creating an app that works like a DMS (Document Management System), so my client will be uploading PDFs, XLSs, and DOCs.
You don't want to be uploading anything to Heroku; it has an ephemeral file system which is reset on restarts/deploys. Anything uploaded should go to a permanent file store like Amazon S3.
https://devcenter.heroku.com/articles/dynos#ephemeral-filesystem
From "How much disk space on the Dyno can I use?" on the Heroku help site:
Issue
You need to store temporary files on the Dyno
Resolution
Application processes have full access to the available, unused space on the mounted /app disc, allowing your application to write gigabytes of temporary data files. To find approximately how much space is available for your current Dyno type, run the CLI command heroku run "df -h" --size=standard-1x -a APP_NAME, and check the value for the volume mounted at /app.
Different Dyno types might have different size discs, so it's important that you check with the correct Dyno size
Please note:
Due to the Dynos ephemeral filesystem, any files written to the disc will be permanently destroyed when the Dyno is restarted or cycled. To ensure your files persist between restarts, we recommend using a third party file storage service.
The important part here is that it is not the same value for every plan, and it is possibly subject to change over time:
Different Dyno types might have different size discs, so it's important that you check with the correct Dyno size
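If you'd rather check from application code than from a one-off heroku run command, a minimal sketch using Python's standard library (the /app mount point is the one the help article cites):

```python
# Report total and free space on the volume that backs /app.
import shutil

usage = shutil.disk_usage("/app")
print("total %.1f GB, free %.1f GB" % (usage.total / 1024.0 ** 3,
                                       usage.free / 1024.0 ** 3))
```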
The correct answer is that it would appear you have 620 GB.
According to this answer: https://stackoverflow.com/a/16938926/3973137
https://policy.heroku.com/aup#quota
Network Bandwidth: 2TB/month - Soft
Shared DB processing: Max 200msec per second CPU time - Soft
Dyno RAM usage: Determined by Dyno type - Hard
Slug Size: 500MB - Hard
Request Length: 30 seconds - Hard
Maybe you should think about storing the data on Amazon S3?
