I'm using Heroku Scheduler to run a few apps each night. I believe they are what Heroku would call "worker apps," because they run for about 2 hours each and only need to run once per day. That said, I can't figure out how to change them from being web apps to worker apps.
For context, I'm using the free tier of Heroku.
You need to create a file called Procfile in your root.
Found the answer here: https://stackoverflow.com/a/44122238/12020295
So, change your Procfile to read (with your specific command filled in):
worker: node index.js
and then also run on CLI:
$ heroku scale worker=1
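For reference, one Procfile can declare several process types side by side; a minimal sketch (the commands here are placeholders for whatever your apps actually run):
web: node server.js
worker: node index.js
If the app serves no HTTP traffic at all, you can omit the web line entirely.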
I've previously deployed apps on Heroku written in Rails and in Express.js, and had never come across the concept of a Procfile before. Now that I've just gone to deploy a Flask app, I discovered this Procfile concept and found that the app would not run correctly without it. The Heroku docs say nothing about this being Flask-specific, and imply it's needed for all apps.
What's up with that? Why didn't I need it before, but needed it now?
package.json is how a Node.js app tells Heroku what to run: with no Procfile, the Node buildpack falls back to its start script. Rails apps get a similar default from the Ruby buildpack (bin/rails server). There is no such convention for a plain Flask app, hence the need for a Procfile.
Either way, Heroku needs to know how to execute your project.
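For example, a minimal Procfile for a Flask app, assuming you serve it with gunicorn and your entry module app.py exposes a Flask object named app:
web: gunicorn app:app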
What is the difference between heroku ps:exec and heroku run bash? I am just trying to understand the concept. Both seem to establish an SSH tunnel to a remote container/dyno. So why does heroku ps:exec require a dyno restart on first use? That command seems more generic (since it uses a default shell), so what needs to be configured/installed for it?
heroku run bash creates a standalone one-off dyno (i.e. one not associated with any process in your formation) that has your application code available and gives you a bash session. This is helpful for running one-off tasks like a database migration; it can also be helpful for debugging issues where you need to look at the filesystem.
heroku ps:exec tunnels to a dyno that is already running as part of your formation. If you had 5 web dynos, for instance, you could tunnel directly to web.3. This is useful when a dyno is exhibiting issues (memory pressure or high load, for example); being able to connect to the problematic dyno directly is very useful for debugging.
You should also note that your config vars (i.e. environment variables set on the Heroku settings tab) are not set in a heroku ps:exec session.
I can't say for certain why a restart is required, but I imagine some configuration needs to change to enable a connection to a dyno already running in the fleet.
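To make the difference concrete, the invocations look roughly like this (app and dyno names are placeholders):
heroku run bash -a my-app                # boots a fresh one-off dyno with your code
heroku ps:exec -a my-app --dyno web.3    # SSH session on the already-running web.3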
I am sharing a MongoDB between a few Heroku apps and would like to move the ownership/billing to another app.
I have tried attaching the add-on (heroku addons:attach) to the receiving app and then running heroku addons:detach on the billing app, but this doesn't work.
Heroku support just confirmed that it is not possible to change the billing app for an add-on.
You can attach the add-on to other apps, but if you delete the original billing app then it will instantly delete your database without warning - even if it is attached to other active apps.
Some add-ons, like Postgres, offer a fork option, so you may be able to fork, reconnect to the new instance, and delete the old database. The Postgres fork command looks like:
heroku addons:create heroku-postgresql:standard-0 --fork the-old-app-name::HEROKU_POSTGRESQL_CHARCOAL --app the-new-app-name
Where HEROKU_POSTGRESQL_CHARCOAL is the config var name of your old database on your old app.
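For completeness, attaching an existing add-on to another app looks roughly like this (the resource and app names here are hypothetical; heroku addons -a old-app shows the real resource name):
heroku addons:attach mongolab-curved-12345 --app new-app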
Is it possible, and if so, how? I'd like to be able to reach it from my existing Heroku infrastructure.
Will I need a Procfile? From what I understand it's just a standalone binary written in Go, so it shouldn't be that hard to deploy. I'm just curious how to deploy it, because I don't think I understand the ins and outs of Heroku deployment.
Heroku Dynos should not be used to deploy a database application like InfluxDB.
Dynos are ephemeral servers: data does not persist between dyno restarts and cannot be shared with other dynos. Practically speaking, any database application deployed on a dyno is essentially useless. This is why databases on Heroku (e.g. Postgres) are all Add-ons. InfluxDB should be set up on a different platform (like AWS EC2 or a VPS), since a Heroku Add-on is not available.
That said, it is possible to deploy InfluxDB to a Heroku dyno.
To get started, it is important to understand the concept of a 'slug'. Slugs are containers (similar to Docker images) which hold everything needed to run a program on Heroku's infrastructure. To deploy InfluxDB, an InfluxDB slug needs to be created.* There are two ways to create a slug for a Go program:
Create a slug directly from a Go executable as described here.**
Build the slug from source using the Heroku Go buildpack (explained below).
To build the slug from source using a buildpack, first clone the InfluxDB Github repo. Then add a Procfile at the root of the repo, which tells Heroku the command to run when the dyno starts up.
echo 'web: ./influxd' > Procfile
The Go buildpack requires all dependencies be included in the directory. Use the godep dependency tool to vendor all dependencies into the directory.
go get github.com/tools/godep
godep save
Next, commit the changes made above to the git repo.
git add -A .
git commit -m dependencies
Finally, create a new app and tell it to compile with the Go buildpack.
heroku create -b https://github.com/kr/heroku-buildpack-go.git
git push heroku master
heroku open  # open the newly created InfluxDB instance in the browser
Heroku will show an error page, because its 'web' process type requires the app to listen for incoming requests on the port given by the $PORT environment variable; otherwise it kills the dyno. InfluxDB's API and admin panel listen on ports 8086 and 8083, respectively.
Unfortunately, InfluxDB does not allow those ports to be set from environment variables, only through the config file (/etc/config.toml). A small bash script executed before InfluxDB starts could rewrite the config file with the correct port.
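A sketch of such a wrapper, assuming a config.toml shipped in the app directory with a top-level port key (the key name and file layout are assumptions):
#!/usr/bin/env bash
# start.sh: point InfluxDB's API at the port Heroku assigns, then hand off
set -e
sed -i "s/^port = .*/port = ${PORT}/" config.toml
exec ./influxd -config config.toml
The Procfile would then read web: ./start.sh instead of web: ./influxd.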
Another problem: Heroku exposes only one port per dyno, so the API and the admin panel cannot both be exposed to the internet at the same time. A smart reverse proxy could work around this using Heroku's X-Forwarded-Port request header.
Bottom line, do not use Heroku dynos to run InfluxDB.
* This means the benefits of a standalone Go executable are lost when deploying to Heroku, since it needs to be recompiled for Heroku's stack.
** Creating a slug directly from the InfluxDB executable does not work because there is no built-in way to listen on the port Heroku assigns via the $PORT environment variable.
I like to think anything is possible on a Heroku dyno when using a custom buildpack, but there are some considerations when hosting with Heroku:
ops, e.g. backup and monitoring (does it entail installing extra services, opening extra ports, etc.? Heroku might get in the way here)
performance, considering dyno size
and if you need a larger dyno, cost becomes an issue; you'll get more bang for your buck going the IaaS route
other "features" of a dyno, e.g. disk ephemerality
I highly recommend hosted InfluxDB, or spinning up your own on a VPS, either of which you can point your existing Heroku-based apps to. It then helps to get those instances as close together as possible (i.e. same region, or co-located if that's an option), presuming a need for low latency between the DB and the app stack.
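Pointing a Heroku app at the external instance is then just a config var (the var name and URL here are made up):
heroku config:set INFLUXDB_URL="https://user:password@influx.example.com:8086" -a my-app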
One downside of pushing to Heroku is that I must push the code (and the server restarts automatically) before running my DB migrations.
This can obviously cause some 500 errors for users navigating the website: the new code is live without the new tables/attributes. The solution Heroku proposes is maintenance mode, but I want a way with no downside that keeps my web app running the whole time!
Is there a way? For example with Capistrano:
I prepare the code to deploy in a new dir
I run (backward) migrations and the old code continue to work perfectly
I switch the Mongrel instances to the new dir and restart the server
...and I have no downtime!
You could set up a second Heroku app that points to the same DB as your primary production app, and use the secondary app to run your DB migrations without interrupting production (assuming the migrations don't break the previous version of your app).
Let's call the Heroku apps PRODUCTION and STAGING.
Your deploy sequence would then become something like this (assuming git remotes named staging and production for the two apps):
Deploy new code to STAGING
git push staging master
Run database migrations on STAGING (to update PROD db)
heroku run -a staging-app rake db:migrate
Deploy new code to PRODUCTION
git push production master
The staging app won't cost you anything, since you won't need to exceed Heroku's free tier, and it would be pretty trivial to set up a rake deploy script to do this for you automatically (a rough shell equivalent is sketched below).
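Something like this, assuming git remotes named staging and production and a staging app called staging-app:
#!/usr/bin/env bash
set -e
git push staging master                     # deploy new code to STAGING
heroku run -a staging-app rake db:migrate   # migrate the shared database
git push production master                  # deploy new code to PRODUCTION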
Good luck!
If you're able to live with two versions of the same app live at the same time, Heroku now has a preboot feature.
https://devcenter.heroku.com/articles/preboot
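Enabling it is one command per app:
heroku features:enable preboot -a your-app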
The only method I know of to improve the process somewhat is what this guy suggests. It's still not a hot-deploy scenario, though:
http://casperfabricius.com/site/2009/09/20/manage-and-rollback-heroku-deployments-capistrano-style/
One thing I would suggest is pushing only your migrations up to Heroku first and running them before you push the rest of your codebase. This entails committing the migrations as standalone commits and manually pushing them each time (which is not ideal). I'm very surprised there isn't a better solution to this issue, given all of the large apps hosted on Heroku now.
You actually will have some downtime when Heroku restarts your app. They have a feature called preboot that starts up new dynos before shutting down the old ones: https://devcenter.heroku.com/articles/labs-preboot/
As for database migrations, that article links to this one on how to deal with that issue: http://pedro.herokuapp.com/past/2011/7/13/rails_migrations_with_no_downtime/
I first commit the migrations, run them, then push the rest of the code. You can commit a single file like so:
git commit -m 'added migration' -- db/migrate/2012-your-migration.rb
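The full sequence then looks something like this (the migration file name is a placeholder):
git commit -m 'added migration' -- db/migrate/2012-your-migration.rb
git push heroku master
heroku run rake db:migrate
git add -A . && git commit -m 'rest of the code'
git push heroku master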
You can't deploy to Heroku with Capistrano; you're limited to the tooling Heroku provides.
A zero-downtime system is impossible in all cases. How do you make a big schema change without stopping your server? If you don't stop it, requests may run against a half-migrated schema and your database can become inconsistent, so using a maintenance page is the normal solution.
If you want a partial workaround, balance across two servers, one serving the database read-only during your migration. You can switch to that instance during the migration, avoiding the maintenance page, then switch back to the master afterwards.
Right now I don't see any possibility to do this without downtime. I hate it too.
This console command does it in the smallest amount of time I can think of:
git push heroku master &&
heroku maintenance:on &&
sleep 5 &&
heroku run rails db:migrate &&
sleep 3 &&
heroku ps:restart &&
heroku maintenance:off
git push heroku master pushes the master branch to Heroku
heroku maintenance:on turns on maintenance mode so no 500s are served
sleep 5 gives the dynos time to start up with the new code (without it, the migration might fail)
heroku run rails db:migrate runs the actual migration
heroku ps:restart — from experience, the restart makes sure the new dynos have the latest schema
heroku maintenance:off turns maintenance mode off
You might have to add -a <app name> to all the heroku commands if you have multiple apps.
Chained with &&, this runs each step in series as a single terminal command (on macOS or any Unix shell).
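To reuse it across apps, you could wrap it in a small script with the app name passed as an argument (a sketch, not battle-tested):
#!/usr/bin/env bash
set -e
APP="$1"
git push heroku master
heroku maintenance:on -a "$APP"
sleep 5
heroku run rails db:migrate -a "$APP"
sleep 3
heroku ps:restart -a "$APP"
heroku maintenance:off -a "$APP"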