I currently want to deploy a deep learning REST API using Flask on Heroku. The weights (it's a pre-trained BERT model) are stored here as a .zip file. Is there a way I can directly deploy these?
From what I currently understand, I have to upload these to GitHub/S3. That's a bit of a hassle and seems pointless since they are already hosted. Do let me know!
Generally you can write a bash script that unzips the content and then you execute your program. However...
Time Concern: Unpacking costs time, and free-tier Heroku workers only run for roughly a day before being forcefully restarted. If you are operating a web dyno, the restarts will be even more frequent, and if it takes too long to boot, the process fails (60 seconds to bind to $PORT).
Size Concern: That zip file is 386 MB and is likely to be even bigger once unpacked.
Heroku has a slug size limit of 500 MB, see: https://devcenter.heroku.com/changelog-items/1145
Once the zip file is unpacked you will be over the limit: the zip file plus its unpacked content is well over 500 MB. You would need to pre-unpack it and make sure the resulting files stay under 500 MB, but the data is already 386 MB zipped and will only get bigger once unpacked. Furthermore, the buildpacks you rely on (Python, JavaScript, ...) and their processing take up space as well. You will go well over 500 MB.
Which means: You will need to pay for Heroku services or look for a different hosting provider.
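For completeness, here is a rough sketch of the unpack-at-boot idea from the first paragraph, in Python rather than bash. The URL and paths are placeholders (substitute wherever the .zip is actually hosted), it assumes the requests library is installed, and it is still subject to the boot-time limits described above:

    import os
    import zipfile

    import requests

    # Placeholder URL for the hosted weights archive -- substitute the real one.
    WEIGHTS_URL = "https://example.com/bert_weights.zip"
    ZIP_PATH = "/tmp/bert_weights.zip"
    WEIGHTS_DIR = "/tmp/bert_weights"   # lives on the dyno's ephemeral disk

    def fetch_weights():
        """Download and unpack the model weights once, at process start-up."""
        if os.path.isdir(WEIGHTS_DIR):
            return WEIGHTS_DIR  # already unpacked during this dyno's lifetime
        with requests.get(WEIGHTS_URL, stream=True, timeout=60) as resp:
            resp.raise_for_status()
            with open(ZIP_PATH, "wb") as fh:
                for chunk in resp.iter_content(chunk_size=1 << 20):
                    fh.write(chunk)
        with zipfile.ZipFile(ZIP_PATH) as zf:
            zf.extractall(WEIGHTS_DIR)
        os.remove(ZIP_PATH)  # free the ~386 MB the archive itself occupies
        return WEIGHTS_DIR

Calling fetch_weights() at import time in your Flask app means the download happens during boot, which is exactly where the 60-second $PORT bind limit bites on a web dyno.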
Related
My Heroku app seems to have more than 3 GB of storage capacity and I want to know: is that true?
Based on the Heroku site it should only have 500 MB, as you can see here:
Heroku has certain soft and hard limits in using its service. Hard limits are automatically enforced by the Service. Soft limits are consumable resources that you agree not to exceed.
Network Bandwidth: 2TB/month - Soft
Shared DB processing: Max 200msec per second CPU time - Soft
Dyno RAM usage: Determined by Dyno type - Hard
Slug Size: 500MB - Hard
Request Length: 30 seconds - Hard
Excuse me, I googled "free dyno max storage size" but I only get sites like this one, which have no information about the maximum capacity of Heroku free apps!
I must add that someone else uploaded the my.sassy.girl.s1.web.48-pahe.in file to this site, and I don't know whether its size is really 3 GB (when I try to download it, Firefox shows its size as 3 GB). Any idea how to find out whether that file is really 3 GB?
Thanks.
The 500 MB limit is that of the slug - the code and other assets in your Git repository that you're deploying.
Heroku dynos also have temporary storage you can utilize, but it's important to note that any files placed on these dynos disappear after any dyno reboot. That means that every 24 hours (as well as after any deployment), any of your files that are not in your Git repository will go away.
https://devcenter.heroku.com/articles/dynos#ephemeral-filesystem
User-uploaded files should go to static storage like Amazon S3.
https://devcenter.heroku.com/articles/s3
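If you do use S3, a server-side copy with boto3 can be as small as the sketch below. The bucket name is made up and the AWS credentials are assumed to come from config vars; treat it as a starting point rather than Heroku's own recipe:

    import boto3

    # Credentials come from environment/config vars; the bucket name is a placeholder.
    s3 = boto3.client("s3")

    def store_upload(local_path, key, bucket="my-app-uploads"):
        """Copy a file written to the dyno's ephemeral disk into S3 so it survives restarts."""
        s3.upload_file(local_path, bucket, key)
        return "s3://{}/{}".format(bucket, key)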
The instance started to fail on MySQL... MySQL stops out of nowhere. I kept restarting it and it worked again.
But now it has gotten worse. When I download a file from another instance, the transfer rate starts at 900 kbps and keeps dropping until it reaches 20 kbps. The same happens for external downloads.
I also tested a zip job, zipping a big file... it starts quickly, then it slows down and settles at a rate of 10 files zipped per second, which is too slow (other instances get 1000 files per second).
I also can't access the hosted websites over HTTP because it is too slow.
I have already rebooted and done a stop->start. I also made an image and rebuilt it on a new instance, and the problem continues. I also changed the volume used by the instance, and it is still slow.
What should I do?
I'll take a stab (given the lack of details) that your instance is just too small. It sounds like you are running at least a web server and MySQL on it, and I'll also bet that you are using a micro instance, which have notoriously variable performance and really aren't suitable for running a web server and database server with consistent performance (maybe ok for development, but not production imo).
Try just spinning up a larger instance and see if your problems magically go away; you can test this for a few dollars, and if it does solve the problem you can decide whether to permanently upgrade to the larger instance size.
I am creating an app that works like a DMS (Document Management System), so my client will be uploading PDFs, XLSs and DOCs.
You don't want to be uploading anything to Heroku: it has an ephemeral file system which is reset on restarts/deploys. Anything uploaded should go to a permanent file store like Amazon S3.
https://devcenter.heroku.com/articles/dynos#ephemeral-filesystem
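One common pattern for a document app like this is to hand the client a short-lived presigned URL so the PDF/XLS/DOC goes straight to S3 and never touches the dyno's ephemeral disk. A rough sketch, assuming boto3 and a hypothetical bucket name:

    import boto3

    s3 = boto3.client("s3")

    def presigned_upload_url(key, bucket="my-dms-documents", expires=3600):
        """Return a short-lived URL the client can PUT a document to directly,
        so the file goes straight to S3 instead of the dyno's ephemeral disk."""
        return s3.generate_presigned_url(
            "put_object",
            Params={"Bucket": bucket, "Key": key},
            ExpiresIn=expires,
        )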
From "How much disk space on the Dyno can I use?" on the Heroku help site:
Issue
You need to store temporary files on the Dyno
Resolution
Application processes have full access to the available, unused space on the mounted /app disc, allowing your application to write gigabytes of temporary data files. To find approximately how much space is available for your current Dyno type, run the CLI command heroku run "df -h" --size=standard-1x -a APP_NAME, and check the value for the volume mounted at /app.
Different Dyno types might have different size discs, so it's important that you check with the correct Dyno size
Please note:
Due to the Dynos ephemeral filesystem, any files written to the disc will be permanently destroyed when the Dyno is restarted or cycled. To ensure your files persist between restarts, we recommend using a third party file storage service.
The important part here is that it is not the same value for every plan and is possibly subject to change over time:
Different Dyno types might have different size discs, so it's important that you check with the correct Dyno size
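If you would rather check from inside a running (Python) dyno than via heroku run, the standard library reports the same numbers as df -h; this is just a convenience sketch, not something from the Heroku docs:

    import shutil

    # Same information as `df -h` for the /app mount, queried from inside the dyno.
    usage = shutil.disk_usage("/app")
    print("free on /app: {:.1f} GiB of {:.1f} GiB".format(
        usage.free / 1024 ** 3, usage.total / 1024 ** 3))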
The correct answer is that it would appear you have 620 GB.
According to this answer: https://stackoverflow.com/a/16938926/3973137
https://policy.heroku.com/aup#quota
Network Bandwidth: 2TB/month - Soft
Shared DB processing: Max 200msec per second CPU time - Soft
Dyno RAM usage: Determined by Dyno type - Hard
Slug Size: 500MB - Hard
Request Length: 30 seconds - Hard
Maybe you should think about storing the data on Amazon S3?
So far I get an average of 700 kilobytes per second for downloads via Chrome hitting an EC2 instance in Virginia (us-east region). If I download directly from S3 in Virginia (us-east region) I get 2 megabytes per second.
I've simplified this way down to simply running Apache and reading a file from a mounted EBS volume. Less than one percent of the time I've seen the download hit around 1,800 kilobytes per second.
I also tried nginx, no difference. I also tried running a large instance with 7 GB of RAM. I tried allocating 6 GB of RAM to the JVM and running Tomcat, streaming the files in memory from S3 to avoid the disk. I tried enabling sendfile in Apache. None of this helps.
When I serve the file from Apache, reading from the file system, and use a download manager such as DownThemAll, I always get 2 megabytes per second when downloading from an EC2 instance in Virginia (us-east region). It's as if my Apache is configured to only allow 700 kilobytes per second per thread. I don't see any configuration options relating to this, though.
What am I missing here? I also benchmarked Dropbox downloads, as they use EC2 as well, and I noticed I get roughly 700 kilobytes per second there too, which is way slow as well. I imagine they must host their EC2 instances in Virginia / the us-east region as well, based on the speed. If I use a download manager to download files from Dropbox I get 2 megabytes a second as well.
Is this just the case with TCP, where if you are far away from the server you have to split transfers into chunks and download them in parallel to saturate your network connection?
I think your last sentence is right: your 700 KB/s is probably a limitation of a given TCP connection ... maybe a throttle imposed by EC2, or perhaps your ISP, or the browser, or a router along the way -- dunno. Download managers likely split the request over multiple connections (I think this is called "multi-source"), gluing things together in the right order after they arrive. Whether this is the case depends on the software you're using, of course.
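To illustrate the "multi-source" idea (not a recommendation of any particular tool), here is a rough Python sketch of splitting one download into parallel byte-range requests. It assumes the server honours Range headers, which S3 and a default Apache both do:

    import concurrent.futures

    import requests

    def ranged_download(url, dest, connections=4):
        """Fetch one file over several parallel connections using HTTP Range requests,
        roughly what a 'multi-source' download manager does."""
        total = int(requests.head(url, allow_redirects=True).headers["Content-Length"])
        step = total // connections

        def fetch(i):
            start = i * step
            end = total - 1 if i == connections - 1 else start + step - 1
            part = requests.get(url, headers={"Range": "bytes=%d-%d" % (start, end)})
            return start, part.content

        with open(dest, "wb") as fh, \
                concurrent.futures.ThreadPoolExecutor(connections) as pool:
            for start, chunk in pool.map(fetch, range(connections)):
                fh.seek(start)
                fh.write(chunk)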
I have over 500 machines distributed across a WAN covering three continents. Periodically, I need to collect text files which are on the local hard disk of each blade. Each server is running Windows Server 2003 and the files are mounted on a share which can be accessed remotely as \\server\Logs. Each machine holds many files which can be several MB each, and the size can be reduced by zipping.
Thus far I have tried using PowerShell scripts and a simple Java application to do the copying. Both approaches take several days to collect the 500 GB or so of files. Is there a better solution which would be faster and more efficient?
I guess it depends on what you do with them ... if you are going to parse them for metrics data into a database, it would be faster to have that parsing utility installed on each of those machines, parsing and loading into your central database at the same time.
Even if all you are doing is compressing and copying to a central location, set up those commands in a .cmd file and schedule it to run on each of the servers automatically. Then you will have distributed the work amongst all those servers, rather than forcing your one local system to do all the work. :-)
The first improvement that comes to mind is to not ship entire log files, but only the records from after the last shipment. This of course is assuming that the files are being accumulated over time and are not entirely new each time.
You could implement this in various ways: if the files have date/time stamps you can rely on, running them through a filter that removes the older records from consideration and dumps the remainder would be sufficient. If there is no such discriminator available, I would keep track of the last byte/line sent and advance to that location prior to shipping.
Either way, the goal is to only ship new content. In our own system, logs are shipped via a service that replicates them as they are written. That required a small service to handle the log files as they are written, but it reduced the latency in capturing logs and cut bandwidth use immensely.
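A minimal sketch of the "remember the last byte sent" approach, assuming the log is append-only and not rotated between shipments (the side file holding the offset is an invented detail):

    import os

    def read_new_records(log_path, offset_path):
        """Return only what was appended since the last shipment, remembering
        the byte offset we stopped at in a small side file."""
        last = 0
        if os.path.exists(offset_path):
            with open(offset_path) as fh:
                last = int(fh.read() or 0)
        with open(log_path, "rb") as fh:
            fh.seek(last)
            new_data = fh.read()
            new_offset = fh.tell()
        with open(offset_path, "w") as fh:
            fh.write(str(new_offset))
        return new_data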
Each server should probably:
manage its own log files (start new logs before uploading and delete sent logs after uploading)
name the files (or prepend metadata) so the server knows which client sent them and what period they cover
compress log files before shipping (compress + FTP + uncompress is often faster than FTP alone)
push log files to a central location (FTP is faster than SMB, the windows FTP command can be automated with "-s:scriptfile")
notify you when it cannot push its log for any reason
do all the above on a staggered schedule (to avoid overloading the central server)
Perhaps use the server's last IP octet multiplied by a constant to offset in minutes from midnight?
The central server should probably:
accept log files sent and queue them for processing
gracefully handle receiving the same log file twice (should it ignore or reprocess?)
uncompress and process the log files as necessary
delete/archive processed log files according to your retention policy
notify you when a server has not pushed its logs lately
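To make the per-server half of the lists above concrete, here is a rough Python sketch. The FTP host, credentials and paths are all placeholders, and while the list above suggests the Windows FTP command with -s:scriptfile, this shows the same steps (compress, encode sender and period in the name, stagger by the last IP octet, push) in one script:

    import ftplib
    import gzip
    import shutil
    import socket
    import time
    from datetime import date

    FTP_HOST = "logs.example.com"   # placeholder central collection point

    def stagger_seconds(minutes_per_octet=5):
        """Offset this server's upload by (last IP octet * constant) minutes from midnight."""
        last_octet = int(socket.gethostbyname(socket.gethostname()).split(".")[-1])
        return last_octet * minutes_per_octet * 60

    def push_log(log_path):
        host = socket.gethostname()
        archive = "{}-{:%Y%m%d}.log.gz".format(host, date.today())  # sender + period in the name
        with open(log_path, "rb") as src, gzip.open(archive, "wb") as dst:
            shutil.copyfileobj(src, dst)                            # compress before shipping
        with ftplib.FTP(FTP_HOST) as ftp:
            ftp.login("loguser", "secret")                          # placeholder credentials
            with open(archive, "rb") as fh:
                ftp.storbinary("STOR " + archive, fh)

    if __name__ == "__main__":
        time.sleep(stagger_seconds())       # staggered schedule
        push_log(r"C:\Logs\app.log")        # placeholder log path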
We have a similar product on a smaller scale here. Our solution is to have the machines generating the log files push them to a NAS on a daily basis in a randomly staggered pattern. This solved a lot of the problems of a more pull-based method, including bunched-up read/write times that kept a server busy for days.
It doesn't sound like the storage server's bandwidth would be saturated, so you could pull from several clients at different locations in parallel. The main question is: what is the bottleneck that slows the whole process down?
I would do the following:
Write a program to run on each server, which will do the following:
Monitor the logs on the server
Compress them at a particular defined schedule
Pass information to the analysis server.
Write another program which sits on the core server and does the following:
Pulls compressed files when the network/cpu is not too busy.
(This can be multi-threaded.)
This uses the information passed to it from the end computers to determine which log to get next.
Uncompress and upload to your database continuously.
This should give you a solution which provides up to date information, with a minimum of downtime.
The downside will be relatively consistent network/computer use, but tbh that is often a good thing.
It will also allow easy management of the system, to detect any problems or issues which need resolving.
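A rough sketch of just the multi-threaded pull step (it copies straight from the existing \\server\Logs shares rather than using the metadata hand-off described above; server names and paths are placeholders, and shutil.copytree with dirs_exist_ok needs Python 3.8+):

    import concurrent.futures
    import shutil

    SERVERS = ["server001", "server002"]        # ... up to the full 500
    DEST = r"D:\collected"                      # placeholder local staging area

    def pull(server):
        """Copy everything from one server's Logs share; errors are returned, not raised,
        so one unreachable machine doesn't stop the run."""
        try:
            shutil.copytree(r"\\{}\Logs".format(server),
                            r"{}\{}".format(DEST, server),
                            dirs_exist_ok=True)
            return server, None
        except OSError as exc:
            return server, exc

    # A handful of workers keeps several WAN links busy without flooding any one site.
    with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
        for server, error in pool.map(pull, SERVERS):
            print(server, "FAILED:" if error else "ok", error or "")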
NetBIOS copies are not as fast as, say, FTP. The problem is that you don't want an FTP server on each server. If you can't process the log files locally on each server, another solution is to have all the servers upload the log files via FTP to a central location, which you can then process. For instance:
Set up an FTP server as a central collection point. Schedule tasks on each server to zip up the log files and FTP the archives to your central FTP server. You can write a program which automates the scheduling of the tasks remotely using a tool like schtasks.exe:
KB 814596: How to use schtasks.exe to Schedule Tasks in Windows Server 2003
You'll likely want to stagger the uploads back to the FTP server.
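Since KB 814596 covers the schtasks.exe flags, here is a sketch of automating the remote scheduling from one admin box. The server names, task name and script path are placeholders, and note that the Server 2003-era schtasks expects /st in HH:MM:SS format:

    import subprocess

    SERVERS = ["server001", "server002"]          # placeholder; loop over all 500
    UPLOAD_CMD = r"C:\scripts\zip_and_ftp.cmd"    # the zip + FTP script on each box

    for i, server in enumerate(SERVERS):
        start = "01:{:02d}:00".format((i * 2) % 60)   # stagger starts within the 1 AM hour
        subprocess.run([
            "schtasks", "/create",
            "/s", server,          # target machine
            "/tn", "ShipLogs",     # task name
            "/tr", UPLOAD_CMD,     # command the task runs
            "/sc", "DAILY",
            "/st", start,          # HH:MM:SS on Server 2003-era schtasks
        ], check=True)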