Can I disable the automatic restarting of Heroku dynos?

Is there a way to disable Heroku's automatic restarting of dynos, or at least schedule when they are to restart?
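Heroku's scheduled cycling cannot be turned off: dynos are restarted roughly every 24 hours regardless of app settings. You can anchor the cycle, though, because a manual restart resets the 24-hour clock. Below is a minimal Node.js sketch of that approach against the Platform API's dyno-restart endpoint, meant to run from a scheduled job; the HEROKU_APP and HEROKU_API_TOKEN environment variables and the web.1 dyno name are placeholders, not anything Heroku defines for you.

    // Sketch: restart a dyno at a time you choose so the daily cycle
    // starts from then instead of an arbitrary hour. Node 18+ (global fetch).
    async function restartDyno(dyno) {
      const app = process.env.HEROKU_APP;          // placeholder app name
      const token = process.env.HEROKU_API_TOKEN;  // Platform API token
      const res = await fetch(`https://api.heroku.com/apps/${app}/dynos/${dyno}`, {
        method: 'DELETE', // DELETE on a dyno is the Platform API "restart" call
        headers: {
          Accept: 'application/vnd.heroku+json; version=3',
          Authorization: `Bearer ${token}`,
        },
      });
      if (!res.ok) throw new Error(`Restart failed: HTTP ${res.status}`);
    }

    restartDyno('web.1').catch(err => { console.error(err); process.exit(1); });

Running this from Heroku Scheduler at, say, 04:00 means the daily cycle restarts from that point, so the automatic restart lands in a quiet window instead of at a random time.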

Related

ZeroTier and Heroku Dynos

I would like to connect Heroku dynos to external compute and storage resources (e.g. on-prem, multi-cloud, etc.).
Heroku Private Spaces and Shield offer VPC peering with AWS, but that introduces additional overhead.
Does ZeroTier work with Heroku dynos? If so, are there any gotchas to be aware of?

Passenger restarts Node.js app daily on CloudLinux

I have a Node.js application running on an Apache server (CloudLinux). The problem is that Passenger restarts the Node.js app daily at midnight, even when users are connected to the web application. I couldn't find any cron job requesting the restart. Here is the Passenger log entry that appears daily at midnight:
Checking whether to disconnect long-running connections for process 5519, application [...] (production)
Is there a way to prevent this daily restart?
Or at least to change the restart time?
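If the restart itself can't be disabled (on cPanel/CloudLinux machines it is often attributed to the nightly log processing that gracefully restarts Apache), one mitigation is to shut down cleanly when Passenger stops the process, so connected users are drained rather than cut off. A minimal sketch, assuming a plain Node.js http server; Passenger normally sends SIGTERM when shutting a process down, and the 10-second drain window here is an arbitrary choice:

    // Sketch: drain connections instead of dropping them when the app
    // is restarted. Passenger sends SIGTERM to processes it shuts down.
    const http = require('http');

    const server = http.createServer((req, res) => {
      res.end('hello\n');
    });

    server.listen(process.env.PORT || 3000);

    function shutdown(signal) {
      console.log(`${signal} received, draining connections`);
      // Stop accepting new connections; let in-flight requests finish.
      server.close(() => process.exit(0));
      // Force-exit if draining takes too long (10s is an assumption).
      setTimeout(() => process.exit(1), 10000).unref();
    }

    process.on('SIGTERM', () => shutdown('SIGTERM'));
    process.on('SIGINT', () => shutdown('SIGINT'));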

Force changing the IP of the Heroku dyno

I am wondering if there is a way to reset the dyno IP on Heroku whenever I want.
I noticed it sometimes changes when the application is restarted, but not always.
How can we make the dyno IP address change on every restart?
I already saw these questions:
Does Heroku change dyno IP during runtime?
https://www.quora.com/Do-Heroku-dynos-get-new-IP-addresses-when-redeployed-or-restarted
It turns out that restarting the app (deleting the dynos) changes the IP by default.
You can do this using the Heroku API.
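A minimal sketch of that API call in Node.js: the Platform API restarts every dyno when you issue a DELETE against the app's dyno list. The HEROKU_APP and HEROKU_API_TOKEN variables are placeholders (Node 18+ for global fetch):

    // Sketch: restart (replace) all dynos, which usually lands them on
    // different runtime hosts and therefore different outbound IPs.
    async function restartAllDynos() {
      const res = await fetch(
        `https://api.heroku.com/apps/${process.env.HEROKU_APP}/dynos`,
        {
          method: 'DELETE', // DELETE on the dyno list restarts every dyno
          headers: {
            Accept: 'application/vnd.heroku+json; version=3',
            Authorization: `Bearer ${process.env.HEROKU_API_TOKEN}`,
          },
        }
      );
      if (!res.ok) throw new Error(`Restart failed: HTTP ${res.status}`);
    }

    restartAllDynos().catch(console.error);

Keep in mind that dyno placement is opaque, so a restart is not guaranteed to produce a new IP; if you need a controllable outbound address, the usual recommendation is an outbound proxy add-on instead.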

Heroku multiple dynos with Socket.io

I am developing a Node.js application with Socket.io and deploying it on Heroku dynos. Socket.io is using RedisStore with its pub/sub. The Socket.io client works perfectly fine with one dyno on Heroku, but when I increase the number of dynos to more than one (say, two), Socket.io client requests stop working.
Please let me know if any specific client-side configuration is needed when setting up Heroku with multiple web dynos running Socket.io.
Unfortunately, Heroku's router does not support sticky sessions, and Socket.io requires them when you scale beyond one node:
Sticky load balancing: If you plan to distribute the load of connections among different processes or machines, you have to make sure that requests associated with a particular session id connect to the process that originated them.
(from the Socket.io documentation, "Using multiple nodes")
There's a great thread in an issue on the engine.io GitHub repository. It helped me understand sticky sessions, engine.io, and Heroku a lot better.
Sticky sessions are now supported by Heroku, but only if you join their development (beta) program.
In my experience Heroku works well with Socket.io when that session-affinity setting is enabled and combined with the socket.io-redis adapter.
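A minimal server-side sketch of that combination. Package names follow the classic socket.io-redis adapter for Socket.io 1.x/2.x, and REDIS_URL is whatever your Redis add-on exposes; all of that is an assumption about your stack:

    // Sketch: share Socket.io traffic across dynos through Redis pub/sub,
    // so an event emitted on one dyno reaches clients connected to another.
    // Assumes: npm install socket.io socket.io-redis  (Socket.io 2.x API)
    const http = require('http');
    const socketio = require('socket.io');
    const redisAdapter = require('socket.io-redis');

    const server = http.createServer();
    const io = socketio(server);

    // Every dyno attaches to the same Redis instance (e.g. Heroku Redis),
    // which relays emits between dynos over pub/sub.
    io.adapter(redisAdapter(process.env.REDIS_URL));

    io.on('connection', socket => {
      socket.on('chat', msg => {
        io.emit('chat', msg); // fans out to all dynos via the adapter
      });
    });

    server.listen(process.env.PORT || 3000);

Note that the adapter only solves cross-dyno broadcasting; the handshake still has to stick to one dyno. With session affinity enabled that happens at the router, and as a fallback the client can skip long-polling entirely with io(url, { transports: ['websocket'] }), since a single WebSocket connection never switches dynos mid-session.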

AppFabric Cache seems unstable

We're trying to use the AppFabric distributed cache. After a lot of back and forth with non-domain servers, we finally put them in a domain, and installation/setup became a bit easier. We got it up and running after fighting through a ton of errors, most of which seem like things AppFabric could easily test for or report with a more descriptive message. "Temporary error" does not explain a lot...
But there are still issues.
We set up three servers, one of which is the "lead". We finally got the cache working, and we confirmed this by pointing a network load balancer at one server at a time, verifying that we could set a cache entry on one server and retrieve it from another.
Then I restarted the AppFabric Caching service on all servers and suddenly it stopped working. Get-CacheHost says the hosts are up, but we get exceptions like:
ErrorCode<ERRCA0018>:SubStatus<ES0001>:The request timed out
ErrorCode<ERRCA0017>:SubStatus<ES0001>:There is a temporary failure. Please retry later.
Why would this error condition occur by simply restarting the services?
Is AppFabric Cache really ready for production use?
What happens if a server goes offline? Long timeouts?
Are we dependent on the "lead" server being up?
I suspect it will be back up after 5-10 minutes of R&R. It seems to come back by itself sometimes.
Update: it did come back up after a few minutes. We have since tested removing one server from the cluster; this resulted in a long timeout and finally an exception.
We have been debugging this for some time, and here is what we have found so far:
UAC on Windows Server 2008 actually blocks access to the local computer, so commands run against the local machine will fail. Start PowerShell as administrator, or turn off UAC completely, to get around this.
Simply editing the config file by hand will not work; you need to use the export and import commands (Export-CacheClusterConfig / Import-CacheClusterConfig).
Firewalls are a major issue: the installer opens the 222* range of ports, but the PowerShell tools rely on other Windows services. Turning off the firewall on all servers (not recommended) solved the problem.
If a server is removed from the cluster there will be an initial timeout before the cluster can operate again.
After a restart, the cluster takes 2-5 minutes to come back up.
If one server is unreachable during a restart, the startup time increases further.
If the server holding the shared file share for the configuration is unreachable, the services will not start at all. We tried to work around this by giving each server a private share.
