Heroku: Does preboot work if I only have one dyno?

I currently have one hobby dyno and I'd like to upgrade it to Standard 1x because of the preboot feature laid out here: https://devcenter.heroku.com/articles/preboot
Instead of stopping the existing set of web dynos before starting the
new ones, preboot ensures that the new web dynos are started (and
receive traffic) before the existing ones are terminated. This can
contribute to zero downtime deployments.
The wording is confusing because it sounds like I must have more than one dyno for it to work: an old one and a new one. Is this true? Or can I do zero-downtime deploys with just one Standard dyno?

This also works with 1 dyno: during a deploy, Heroku starts the replacement dyno in the background and only terminates the old one once the new one is receiving traffic. We use it heavily for all kinds of applications.
The article already states most of the important details.
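For reference, preboot is a per-app feature you enable yourself once the app is on Standard or larger dynos (the app name below is a placeholder):
heroku features:enable preboot -a your-app-name
heroku features -a your-app-name
The second command simply lists the app's features so you can confirm preboot shows as enabled.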

Related

How exactly do dynos/memory/processes work?

For anyone who has used Heroku (or anyone else who has deployed to a PaaS before and has experience):
I'm confused about what Heroku means by "dynos", how dynos handle memory, and how users scale. I read that they define dynos as "app containers", which means that the memory/file system of dyno1 can't be accessed by dyno2. Makes sense in theory.
The containers used at Heroku are called “dynos.” Dynos are isolated, virtualized Linux containers that are designed to execute code based on a user-specified command. (https://www.heroku.com/dynos)
Also, users can define how many dynos, or "app containers", are instantiated, if I understand correctly, through commands like heroku ps:scale web=1.
I recently created a web app (a Flask/gunicorn app, if that even matters) where I declare a variable that keeps track of how many users have visited a certain route (I know, not the best approach, but that's irrelevant here). In local testing it appeared to work properly, even for multiple clients.
When I deployed to Heroku with only a single web dyno (heroku ps:scale web=1), I found this was not the case: the variable appeared to have multiple instances that updated independently of one another. I understand that memory isn't shared between different dynos, but I have only one dyno running the server, so I thought there should be only a single instance of this variable/web app. Is the dyno running my server on single/multiple processes? If so, how can I limit it?
Note: this web app does save files on disk, and on each API request I check whether the file exists. Because it does, this tells me that I am hitting the same dyno.
Perhaps someone can enlighten me? I'm a beginner to deployment, but willing to learn/understand more!
Is the dyno running my server on single/multiple processes?
Yes, probably:
Gunicorn forks multiple system processes within each dyno to allow a Python app to support multiple concurrent requests without requiring them to be thread-safe. In Gunicorn terminology, these are referred to as worker processes (not to be confused with Heroku worker processes, which run in their own dynos).
We recommend setting a configuration variable for this setting. Gunicorn automatically honors the WEB_CONCURRENCY environment variable, if set.
heroku config:set WEB_CONCURRENCY=3
The WEB_CONCURRENCY environment variable is automatically set by Heroku, based on the processes’ Dyno size. This feature is intended to be a sane starting point for your application. We recommend knowing the memory requirements of your processes and setting this configuration variable accordingly.
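To see this multi-process behaviour for yourself, a tiny diagnostic route like the following (not part of the original app; a minimal Flask sketch) will report a different process ID depending on which gunicorn worker handled the request:
    import os
    from flask import Flask

    app = Flask(__name__)

    # Each gunicorn worker is a separate OS process with its own copy of any
    # module-level variables, so repeated requests can land on different PIDs.
    @app.route("/whoami")
    def whoami():
        return f"served by worker pid {os.getpid()}"
With WEB_CONCURRENCY=3 you should see up to three distinct PIDs coming from a single dyno.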
The solution isn't to limit your processes, but to fix your application. Global variables shouldn't be used to store data across processes. Instead, store data in a database or in-memory data store.
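As a sketch of that fix, assuming the redis Python package and a Redis add-on that sets a REDIS_URL config var (Heroku's Redis add-on does), the counter would live outside the worker processes:
    import os

    import redis
    from flask import Flask

    app = Flask(__name__)
    # One shared store for every gunicorn worker (and every dyno).
    store = redis.Redis.from_url(os.environ["REDIS_URL"])

    @app.route("/visits")
    def visits():
        # INCR is atomic, so concurrent workers can't lose updates.
        count = store.incr("visit_count")
        return f"this route has been visited {count} times"
Any shared database would work equally well; the point is that the state no longer lives in a single process's memory.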
Note: this web app does save files on disk, and on each API request I check whether the file exists. Because it does, this tells me that I am hitting the same dyno.
If you're just trying to check which dyno you're on, fine. But you probably don't want to be saving actual data to the dyno's filesystem because it is ephemeral. You'll lose all changes made to the filesystem whenever your dyno restarts. This happens frequently (at least once per day).

How to keep only the worker alive on Heroku free tier from within the app?

I'm testing an app with a worker and a web dyno on Heroku's free tier, and I'd like to keep the worker alive so it can execute background tasks while letting the web dyno idle. By default they both go idle after 30 minutes, even if I have things queued on the worker.
I understand there are ways to keep the web dyno alive (and with that the worker as well), and there are ways to keep the web alive while scaling down the worker. However, I need the worker alive and the web dyno idle.
I tried running a recurring job on the worker which would
Restart the dyno.
Scale the dyno down and then back up.
Both approaches worked (as in they restarted and scaled the dyno correctly) but the worker dyno would still idle after 30 mins (as if it's dependent on the web dyno). Edit: yep, that's pretty much the case as explained here: https://devcenter.heroku.com/articles/free-dyno-hours#dyno-sleeping
I could do this from the outside, but it seems I'd have to constantly check the state, since a new restart doesn't seem to give me a fresh 30-minute window. I'd also have to expose the API key, which I'd like to avoid.
If I've understood you correctly, you're trying to stop the web dyno and leave the worker dyno alive.
You can do that from the Resources tab: in the 'web' section, press the pencil icon, toggle it off, and press 'Confirm'.
As a workaround I currently remove the web dyno and explicitly enable it when I need it. As explained here:
Worker-only Free dynos do not sleep since they do not respond to web
requests.
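In CLI terms the same workaround is just scaling the web process type to zero and back when you need it again (app name is a placeholder):
heroku ps:scale web=0 -a your-app-name
heroku ps:scale web=1 -a your-app-name
With web at 0 the app is worker-only, which, per the quote above, does not sleep.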
My workaround was to just create two apps that deploy automatically from the same repository. Then, all you would need to do is enable the worker dyno for one and the web dyno for the other.

Heroku Scheduler for recurring tasks and Delayed Jobs for asynchronous tasks, all using one web dyno

Very similar question to Is it feasible to run multiple processes on a Heroku dyno?, or Running Heroku background tasks with only 1 web dyno and 0 worker dynos, except I'm talking about a Ruby on Rails app.
Context:
I understand that it's encouraged to separate worker and web dynos... but I'm still testing and don't want to pay the expense. Especially because with my app, all the web requests pretty much happen either in the AM or in the PM, and during the whole middle of the day (and also middle of the night), literally nothing is happening.
I'd like the web dyno to run two types of background processing during the downtime:
A recurring, long-running task every day (mailings)
An asynchronous, long-running task that is triggered when a user performs a certain action (it's a mailer)
I've done quite a bit of reading on this, but this is my first time doing anything asynchronous, so I wanted to ask the community a couple of questions just to ensure what I'm trying to do is feasible.
Questions
How do I do activity #1 for free?
To put it bluntly: considering my context above, if I use Heroku's Scheduler add-on, it runs a one-off dyno which I'll be charged for, since I currently use New Relic to constantly ping my web dyno so it never sleeps, meaning my one web dyno already uses up my free dyno allowance. Is there another way of doing this with the one web dyno, which in the middle of the night won't be processing any requests anyway? Alternatively, is there a way to tell New Relic not to ping during certain times, which would then allow me to spin up a one-off dyno while still staying within my free dyno hours?
For activity #2, I'm thinking of using Delayed Job, but how do I tell Delayed Job to wait until the end of user 1's session, then run the mailer for user 1, pause again if user 2 sends a request, and when user 2 is finished, pick up where it left off on user 1's mailer, then do user 2's mailer... and so forth? I think the root of the challenge is that, from what I've read, Delayed Job needs to be started with a script. But I'm not going to be at my computer starting a script all the time. How do I make the start (and the queue as illustrated in the question) something that happens automatically?
I'd love even just directional pointers on what methods to look at, what to consider, etc.
I'm going to check out a nifty gem https://github.com/brandonhilkert/sucker_punch to do this. According to the author, it was written specifically to use Heroku's single dyno for hobby websites that have no need to spin up another dyno. It basically creates another thread.
FYI, there is also an add-on gem that allows sucker_punch to run recurring tasks, called Fist of Fury: https://github.com/facto/fist_of_fury

How can I push to Heroku, and still keep the web app live?

Say for example you had two web dynos set up for your account (0 worker dynos).
To save switching to maintenance mode, how would one push to one web dyno, and then have the other one update once the first has finished booting?
You can't do that - pushing to Heroku will result in both dynos being restarted with the contents of the new slug.
However, there is a labs feature called preboot (https://devcenter.heroku.com/articles/labs-preboot) which might accomplish exactly what you want.

What are the key indicators for Heroku apps to scale up web dynos

I am using New Relic Standard and Rails 3 on Heroku to build a web site, but I'm not sure which indicators shown in New Relic I should keep an eye on to know when to scale up the web dyno.
Say, when indicator A reaches level X, I should add one dyno to bring it back down.
Thank you!
Primarily you want to be looking at your logs and at the queue attribute on the heroku[router] entries - if this starts going up (and, importantly, staying up) then you have too many requests being queued that can't be processed fast enough.
If you're seeing long queue-wait times in the New Relic dashboard and there are no other good explanations (e.g. high variability in response times, use of the Thin web server instead of Unicorn, etc.), that's generally a good indication that requests are waiting in a queue to be processed by a dyno.
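If you'd rather watch those router entries directly instead of (or alongside) New Relic, you can filter the log stream to the router's output; exact flag names have shifted between Heroku CLI versions, so treat this as a sketch:
heroku logs --source heroku --dyno router --tail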
