How to know which dyno served a request on Heroku - ruby

I have a Sinatra app deployed on Heroku, and I have scaled the web process to 3 dynos, so requests are being served by
web.1,
web.2 and
web.3 respectively.
I want to know, in Ruby, from within a controller action, which dyno is serving the current request, and then store this in the database. I did a bit of googling but didn't find any satisfactory answer.
Any help would be greatly appreciated.
Thanks

There is really no way to know this. You don't get any HTTP headers from Heroku that specify which dyno is handling the request, so there's no way to tell. The best you can do is have Heroku stream all your logs somewhere (a syslog drain) so that you can parse the log files and match request URIs to dynos.
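For reference, a syslog drain can be attached with the Heroku CLI; the endpoint and app name below are placeholders:

    heroku drains:add syslog://logs.example.com:514 -a your-app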

There's probably a very hacky way to do this on boot with a worker process.
You need to be able to monitor the output of heroku logs --tail; see https://github.com/dblock/heroku-commander for an example. The worker reads the logs.
The worker makes an HTTP request to the app, e.g. GET app/dyno?id=uuid1. The response is the MAC address of the dyno that responded, e.g. mac1 (see the Sinatra sketch after these steps).
The worker can see in the logs that uuid1 went to web.5, which responded with its MAC. Bingo, the worker now knows.
PUT app/dyno?mac1=web.5&mac2=web.6, etc. Each dyno that receives this compares its MAC to one of the MACs and responds true/false according to whether it now knows who it is.
Repeat until the worker has reached all dynos and all dynos know.
Watch for dyno restarts periodically.
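A minimal Sinatra sketch of the two app-side endpoints in those steps, assuming the macaddr gem (which provides Mac.addr) for reading the dyno's MAC address; everything here is illustrative:

    require 'sinatra'
    require 'macaddr' # provides Mac.addr

    # Identification endpoint: echo this dyno's MAC address so the
    # worker can correlate it with the dyno name in the router logs.
    get '/dyno' do
      Mac.addr
    end

    # Assignment endpoint: the worker PUTs mac=>dyno pairs; each dyno
    # checks whether one of the MACs is its own and remembers its name.
    put '/dyno' do
      if (name = params[Mac.addr])
        $dyno_name = name # e.g. "web.5"
        'true'
      else
        'false'
      end
    end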
You've got to wonder, though, why you need to know that you're "web.1". Why can't it be a unique UUID, like the MAC address of the machine?

Related

How does Heroku dyno caching work with the Play framework

I have a Play application hosted on Heroku with 5 dynos. It seems like my dynos are being restarted randomly at different times. For example, three of them were restarted 22 hours ago, and two of them were restarted 10 hours ago (not sure if this was triggered by clearing the cache). It seems that cached data isn't shared between dynos. My problem is that when I send the same request to my Heroku application multiple times, I get different cached responses: some are the most up-to-date data, others are old data. I assume this is because my requests were processed by different dynos. Restarting all my dynos fixed the problem (I assume this also cleared the cache on all dynos).
So I want to know what triggers the random dyno restarts, and why?
How do I solve the cached-data inconsistency in this case?
Thanks
I think you should use a shared cache in order to avoid this kind of problem when you scale horizontally.
Couchbase is a good solution for this. We use it internally at Clever Cloud (http://www.clever-cloud.com/en/), which is the reason why we released Couchbase as a service.
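To keep code examples in one language, here is the shared-cache idea sketched in Ruby with Redis standing in for Couchbase (the same pattern applies in Play); it assumes the redis gem and a REDIS_URL config var:

    require 'redis'

    # One cache that every dyno talks to, instead of a per-dyno
    # in-memory cache, so all dynos see the same cached value.
    CACHE = Redis.new(url: ENV.fetch('REDIS_URL'))

    def fetch_cached(key, ttl: 300)
      cached = CACHE.get(key)
      return cached if cached
      value = yield                # compute on cache miss
      CACHE.setex(key, ttl, value) # store with an expiry
      value
    end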
As for dyno restarts, did you check the documentation? Dynos are cycled at least once per day.

How does Heroku find out if a process has failed to start?

Does anybody know the explicit criteria Heroku uses to tell that an application hasn't started within 60s? Which URLs does it ping? Which response codes are considered acceptable? I couldn't find such details in the documentation.
For web dynos the measure is the amount of time it takes for something on the dyno to bind to $PORT.
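In other words, the boot check isn't an HTTP ping; it's whether something binds to the port Heroku passes in. A minimal Sinatra sketch of what satisfies it (the '5000' fallback is only for local runs):

    require 'sinatra'

    # Heroku passes the port in $PORT; the dyno counts as started once
    # the server binds to it within the boot window.
    set :bind, '0.0.0.0'
    set :port, ENV.fetch('PORT', '5000').to_i

    get('/') { 'ok' }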

What are the key indicators for Heroku apps to scale up web dynos

I am using New Relic standard and Rails 3 on Heroku to build a web site, but I'm not sure which indicators shown in New Relic I should keep an eye on in order to scale up the web dynos when certain criteria are met.
Say, when indicator A reaches level X, I should add one dyno to bring it down.
Thank you!
Primarily you want to be looking at your logs, and at the queue attribute on the heroku[router] entries - if this starts going up (and, importantly, staying up) then you have too many requests being queued that can't be processed fast enough.
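For illustration, a router line in the old log format looked roughly like this (all values made up; per the update quoted in a later answer, the queue and wait fields were removed in March 2013):

    heroku[router]: at=info method=GET path=/ host=your-app.herokuapp.com dyno=web.1 queue=12 wait=250ms connect=2ms service=85ms status=200 bytes=1234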
If you're seeing long queue-wait times in the New Relic dashboard and there are no other good explanations (e.g. high variability in response times, use of the Thin web server instead of Unicorn, etc.), that's generally a good indication that requests are waiting in a queue to be processed by a dyno.

Will my Heroku app running on 1 web dyno stay active if I pay for 1 worker dyno?

I currently have an app deployed on Heroku which runs on two web dynos so it won't go to sleep if it remains inactive for a certain time.
Now if I scale it down to only one web dyno (free) and instead pay for one worker dyno, will Heroku always keep my app active?
It will still idle - you NEED to have more than a single web dyno
https://devcenter.heroku.com/articles/dyno-idling
You can also use the New Relic add-on to monitor your app and keep it alive. There is a tab in settings to configure availability monitoring.
You can also keep a single web dyno from idling by using a monitoring service like pingdom.com, since it periodically sends a request to your web dyno.
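The same effect can be had from your own worker dyno; a minimal sketch, with the app URL as a placeholder and an interval short enough to beat the idle timeout:

    require 'net/http'

    # Self-ping loop for a worker dyno: hitting the public URL counts
    # as router traffic, so the lone web dyno never goes idle.
    loop do
      Net::HTTP.get(URI('https://your-app.herokuapp.com/')) # placeholder URL
      sleep 20 * 60 # every 20 minutes
    end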
Try Pingdom. Free plans include one website check. I use this service to keep my app active all the time.
Pingdom tests your websites and other infrastructure components as often as every minute to make sure it is all up and running.
From the Pingdom homepage
Pingdom does this by "pinging", or rather requesting a resource from, your website on a regular interval. This has the side effect of keeping your website "active", caches primed, etc., because your website is seeing regular "traffic" (the requests coming from Pingdom).
Try Un-idler. You don't need to sign in and it's free.
http://unidler.herokuapp.com/
You can try http://kaffeine.herokuapp.com/ - it will ping your app every 30 minutes so your app won't go to sleep.
Try CloudUp. It visits your apps periodically to keep them awake. It is free, and you can add as many apps as you want. It also activates apps on Google App Engine and Azure.

Heroku: web dyno vs. worker dyno? How many/what ratio do I need?

I was curious as to what the difference between web and worker dynos is on Heroku. They give a one-sentence explanation on their pricing page, but it just left me confused. How do I know how many of each to pick? Is there a ratio I should aim for? I'm pretty new to this stuff, so can someone give an in-depth explanation, or maybe some way I can calculate how many of each kind of dyno I would need?
Also, I'm confused about what they mean by the number of hours for each dyno.
http://www.heroku.com/pricing
I also happened upon this article. As one of the suggested solutions, it says to increase the number of dynos. Which type of dyno is it referring to here?
http://devcenter.heroku.com/articles/backlog-too-deep
Your best indication of whether you need more dynos (aka processes, on Cedar) is your heroku logs. Make sure you upgrade to expanded logging (it's free) so that you can tail your log.
You are looking for the heroku.router entries, and the value you are most interested in is the queue value - if this is constantly more than 0 then it's a good sign you need to add more dynos. Essentially it means there are more requests coming in than your processes can handle, so they are being queued. If they are queued too long without returning any data they will be timed out.
There's no ideal ratio, I'm afraid; you could have an app doing 100 requests a second that needs many web processes but makes no use of workers at all. You only need worker processes if you are doing processing in the background, like sending emails etc.
P.S. "Backlog too deep" refers to web dynos - it's the web process that would cause it.
UPDATE: On March 26, 2013, Heroku removed the queue and wait fields from the log output.
queue and wait fields have been removed from router log messages. Also, the Heroku router no longer sets the X-Heroku-Dynos-In-Use, X-Heroku-Queue-Depth and X-Heroku-Queue-Wait-Time HTTP headers for incoming requests.
Dynos are basically processes that run on your instance. With the new Cedar stack, they can be set up to execute any arbitrary shell command. For web applications, you generally have one process called "web" which is responsible for responding to HTTP requests from users. All other processes are what were previously called "workers." These run continuously in the background for things like cron, processing queues, and any heavy computation that you don't want to tie up your web processes. You can also scale each type of process, so that multiple processes of each type will be booted up for additional concurrency. The amount of each that you use really depends on the needs of your application and the load it receives. You can use tools like the New Relic plugin to monitor these things. Take a look at the articles on the Process Model and the Procfile in Heroku's dev center for more details.
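Concretely, the process types come from the Procfile. A typical one for a Ruby app might look like this (the worker command is just an example, here the delayed_job rake task):

    web: bundle exec rackup config.ru -p $PORT
    worker: bundle exec rake jobs:work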
A number of people have mentioned that there is no known ratio, and that the ratio of web workers to "background" workers you will want depends on how you designed your application - that is correct. However, I thought it might be useful to add that, as a general rule of thumb, you want your web workers - and thus the controller actions they are serving - to be lightning quick and very lightweight, to reduce latency in response times from browser actions. If there is some browser action that would require more than, say, half a second of real time to serve, then you will probably want to architect some sort of system that pushes the bulk of that action onto a queue.
You would then design one or more offline worker dynos to service this queue. They can take much longer because there are no HTTP responses pending on their output. Perhaps the page you rendered from the initial browser request that pushed the action will serve some JavaScript that polls every 5 seconds to check whether the request has finished, or something along those lines. (A sketch of this enqueue-and-return pattern follows below.)
I still can't give you a ratio to work with for the same reason others have given, but hopefully this helps you decide how to architect your app. (I should also mention this is just one design out of many valid ones.)
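A minimal sketch of the enqueue-and-return pattern mentioned above, using Sidekiq as one common queue choice (Resque or delayed_job are similar); the route and job names are made up:

    require 'sinatra'
    require 'sidekiq'

    class HeavyJob
      include Sidekiq::Worker

      def perform(record_id)
        # long-running work happens here, on a worker dyno
      end
    end

    # The controller action enqueues and returns immediately,
    # keeping the web dyno's response time low.
    post '/things' do
      HeavyJob.perform_async(params[:id])
      status 202 # accepted; the page's JavaScript can poll for completion
    end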
https://stackoverflow.com/a/19965981/1233555 - Heroku has gone to random routing, so some dynos can have queues stacking up (while they serve a lengthy request) while other dynos are free. Avoid this by making sure that all requests are handled very quickly in your web dynos. This will reduce the number of web dynos you need, while requiring more worker dynos.
You also need to care about your web app supporting concurrency, which only some Rails configs do - try Unicorn, or carefully-written code (for I/O that doesn't block the EventMachine) with Thin.
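For instance, a minimal config/unicorn.rb for in-dyno concurrency might look like this (the worker count is illustrative and should be tuned to your dyno's memory):

    # config/unicorn.rb
    worker_processes 3   # several app processes per dyno
    timeout 30           # kill requests that run too long
    preload_app true     # load the app once, then fork workers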
You probably have to try, rather than calculate, to see how many dynos of each kind you need. Make sure your New Relic reports the dyno queue - see the above link.
Short answer is that you need as many as you need to keep your queues down.
As John describes, if you start seeing a queue in your logs then you need more dynos. If you start seeing your background queues getting too long (how you get this info depends on what you have implemented) then you need more workers.
There is no ratio as it's very much dependent on your application design and usage.
