How does the NewRelic agent know how many instances are running or how much RAM the app is using?
I'm wondering how it can glean so much without running an agent on the system. I thought you could only run your application code on Heroku, not arbitrary processes?
I'll assume you're referring to Ruby, but the agent is not actually different on Heroku, apart from some accounting integrations that exist because of the New Relic/Heroku partnership. The agent ships as a library that loads inside your application process, so it can report instance counts and memory without any separate system-level process running on the dyno.
If you're using the New Relic Add-On for Heroku, the memory usage displayed in the "Instances" tab is an average per process.
New Relic will only track the instances that are being used by the app in the time window that you select.
For anyone who has used Heroku (and perhaps anyone else who has deployed to a PaaS before and has experience):
I'm confused about what Heroku means by "dynos", how dynos handle memory, and how users scale. I read that they define dynos as "app containers", which means that the memory/file system of dyno1 can't be accessed by dyno2. Makes sense in theory.
The containers used at Heroku are called “dynos.” Dynos are isolated, virtualized Linux containers that are designed to execute code based on a user-specified command. (https://www.heroku.com/dynos)
Also, users can define how many dynos, or "app containers", are instantiated, if I understand correctly, through commands like heroku ps:scale web=1, and so on.
I recently created a web app (a Flask/gunicorn app, if that even matters) in which I declare a variable that keeps track of how many users have visited a certain route (I know, not the best approach, but that's irrelevant here). In local testing, it appeared to work properly (even for multiple clients).
When I deployed to Heroku with only a single web dyno (heroku ps:scale web=1), I found this was not the case: the variable appeared to exist in multiple copies that updated independently. I understand that memory isn't shared between different dynos, but I have only one dyno running the server, so I thought there should be only a single instance of this variable/web app. Is the dyno running my server on single/multiple processes? If so, how can I limit it?
Note: this web app does save files on disk, and on each API request I check whether the file exists. Because it does, this tells me I am hitting the same dyno.
Perhaps someone can enlighten me? I'm a beginner to deployment, but willing to learn/understand more!
Is the dyno running my server on single/multiple processes?
Yes, probably:
Gunicorn forks multiple system processes within each dyno to allow a Python app to support multiple concurrent requests without requiring them to be thread-safe. In Gunicorn terminology, these are referred to as worker processes (not to be confused with Heroku worker processes, which run in their own dynos).
We recommend setting a configuration variable for this setting. Gunicorn automatically honors the WEB_CONCURRENCY environment variable, if set.
heroku config:set WEB_CONCURRENCY=3
The WEB_CONCURRENCY environment variable is automatically set by Heroku, based on the processes’ Dyno size. This feature is intended to be a sane starting point for your application. We recommend knowing the memory requirements of your processes and setting this configuration variable accordingly.
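If you do want to see or pin the process count (for example, forcing a single worker while debugging per-process state), a minimal sketch of a gunicorn config file is enough; the file name is just a convention, and your Procfile would need to start gunicorn with -c gunicorn.conf.py:

    # gunicorn.conf.py -- illustrative; gunicorn already honors WEB_CONCURRENCY
    # by itself, this just makes the worker count explicit and easy to override.
    import os

    # Default to a single worker so per-process state behaves like local dev.
    workers = int(os.environ.get("WEB_CONCURRENCY", 1))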
The solution isn't to limit your processes, but to fix your application. Global variables shouldn't be used to store data across processes. Instead, store data in a database or in-memory data store.
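For example, here is a minimal sketch of the visit-counter idea backed by Redis instead of a module-level variable, so every gunicorn worker (and every dyno) sees the same value. It assumes the redis-py client and a REDIS_URL config var (e.g. from a Redis add-on); the route and key names are made up:

    import os

    import redis
    from flask import Flask

    app = Flask(__name__)
    # One shared store for all workers/dynos instead of per-process memory.
    r = redis.Redis.from_url(os.environ.get("REDIS_URL", "redis://localhost:6379"))

    @app.route("/landing")
    def landing():
        # INCR is atomic, so concurrent workers cannot lose updates.
        visits = r.incr("landing:visits")
        return f"This page has been visited {visits} times."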
Note: this web app does save files on disk, and on each API request I check whether the file exists. Because it does, this tells me I am hitting the same dyno.
If you're just trying to check which dyno you're on, fine. But you probably don't want to be saving actual data to the dyno's filesystem because it is ephemeral. You'll lose all changes made to the filesystem whenever your dyno restarts. This happens frequently (at least once per day).
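If the files hold real data rather than just a dyno marker, push them to durable object storage instead. A rough sketch with boto3 and S3; the bucket and key names are placeholders:

    import boto3

    s3 = boto3.client("s3")
    # Treat the dyno's disk as scratch space; copy results somewhere durable
    # before the dyno cycles.
    s3.upload_file("/tmp/report.csv", "my-app-uploads", "reports/report.csv")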
I have a small Ruby on Rails application that I have deployed on an Amazon EC2 instance using Capistrano. The instance is a t2.small with nginx installed on it, and a local Postgres database installed on the same server. I have a development instance on which I do frequent deployments. Recently, whenever I do a Capistrano deployment to my EC2 instance, CPU utilization spikes enormously: it usually sits between 20-25%, but during deployment it goes up to about 85%, which makes the instance unresponsive, and I have to do a hard restart of the server to get it working again.
I don't know why this is happening or what I should do to solve it. Load balancing and auto scaling make no sense in this scenario, since the issue occurs only during deployment.
I have attached a screenshot of my server's CPU utilization; the two high peaks are both from when I performed a cap deployment.
The only solution I can think of is increasing the instance type, but I want to know what other options I have. Any help is appreciated, thanks in advance.
If this is an interim spike (only during deployment) and you don't need high CPU during normal application usage, you may want to try the T2 Unlimited approach, which lets the instance burst above its CPU-credit baseline (for a possible extra charge) instead of being throttled.
If T2 Unlimited doesn't cover your need, I think increasing the instance type is the only option left for you.
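For what it's worth, here is a rough sketch of flipping the credit mode with boto3 (the instance ID and region are placeholders, and long periods above the baseline can incur a small extra charge):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # your instance's region
    # Allow the t2.small to burst past its CPU-credit baseline during deploys
    # instead of being throttled back to the baseline.
    ec2.modify_instance_credit_specification(
        InstanceCreditSpecifications=[
            {"InstanceId": "i-0123456789abcdef0", "CpuCredits": "unlimited"}
        ]
    )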
If I have an app on Heroku that consists of one worker and one or no web dynos, will it run? I'm unsure if the absent or idling web dynos will cause the worker dyno not to run.
Heroku doesn't just run web dynos; in fact, it makes no assumptions at all about the processes you're running. There's absolutely nothing wrong with launching a single worker process.
This is actually a common scenario for me when deploying simple cron-like tasks to Heroku; I've written about it here: http://blog.y3xz.com/blog/2012/11/16/deploying-periodical-tasks-on-heroku/
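To illustrate (in Python, purely as an example; the post above uses Ruby), a worker-only app needs nothing more than a Procfile entry pointing at a long-running script, plus heroku ps:scale worker=1 web=0 to run it. The file name and job are made up:

    # worker.py -- started by a Procfile line such as:  worker: python worker.py
    import time

    def do_periodic_work():
        # Placeholder for whatever the job actually does.
        print("running scheduled work")

    if __name__ == "__main__":
        while True:
            do_periodic_work()
            time.sleep(60 * 60)  # wait an hour between runs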
If you are looking for cron-like tasks for simple jobs (like I am), you now have another alternative: Heroku Scheduler. It is easy to configure from the dashboard (there's a small task-script sketch after the list below).
Advantage:
No need to choose and learn a new scheduler library. Configure it in seconds.
Works the same way across platforms: Python, Ruby, etc.
Saves dyno hours for free-plan users, since only the actual working time counts. Some scheduler libraries (like Rufus Scheduler) keep a process running between launches (so that they don't rely on cron to work).
Disadvantage:
Limited options: you can only choose among "Daily"/"Hourly"/"Every 10 minutes".
Conclusion: Best for basic use.
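For reference, the Scheduler simply runs a one-off dyno with a command you register in the dashboard (for a Python app, something like "python tasks/send_digest.py"), and you only pay for the seconds it runs. A hypothetical task script is just an ordinary script:

    # tasks/send_digest.py -- hypothetical one-off task registered in the
    # Heroku Scheduler dashboard with the command "python tasks/send_digest.py".
    def send_digest():
        print("sending daily digest...")  # real work goes here

    if __name__ == "__main__":
        send_digest()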
I'm using System.Runtime.Caching.MemoryCache to simulate a repeated task on a running .NET MVC application deployed on AppHarbor.
Entries in the cache are added using a CacheItemPolicy that contains an AbsoluteExpiration offset and a RemovedCallback that calls a method and re-adds the item to the cache (as described here).
The MemoryCache is first populated in Application_Start. It works fine locally, but doesn't seem to work once deployed on AppHarbor (I also tried HttpRuntime.Cache, with the same result).
My application is running under a CANOE (free) account on AppHarbor that only has one worker. Does this mean that I won't be able to simulate the background task until I upgrade to some paid plan?
Thanks!
Your application has to have visitors every once in a while for this to work. Other than StillAlive, Pingdom is also a good bet for generating requests to your app. You should also take a look at MomentApp. We expect to have background tasks ready shortly.
I don't think upgrading will help; they are working on adding background jobs to AppHarbor, but to my knowledge they aren't available yet.
What about using a service like https://stillalive.com/ to periodically hit a page on your site that then spins up a new thread and starts running your background task? It's available as a free add-on.
I was thinking of doing something like this while waiting for the background task functionality to be available.
I have a Rails app running on heroku with Rufus Scheduler added on.
A 'once a day' task in the scheduler is running more often than once a day.
My guess would be something to do with the heroku app running on different dynos during the day, but I'm at a loss on how to confirm/fix the problem.
Has anyone else seen this/know of a solution?
Edit: I couldn't resolve the problem with the gem and have moved my app over to the Heroku Scheduler add-on, which does not experience this problem.
The Heroku Scheduler isn't guaranteed; it's only a simple scheduling system designed to fill a gap. It has nothing to do with your application moving between dynos, as it's a separate management system that spins up one-off processes.
If timeliness is essential to you, take a look at clockwork, which will let you configure all sorts of stuff, but also give you a bit more reliability (at the expense of having a clock process running).
If this won't do, simply rework your job so that it doesn't matter how often it runs.