I have a Ruby app deployed on one server and Redis on another server. What are the pros and cons of deploying Sidekiq on the same server as the Ruby app?
Probably better for Serverfault...
But the biggest pro of keeping them separate is that you can add more application servers and point them all at your Redis server, making it much easier to scale horizontally.
When you've got both on a single server it might be a bit easier and cheaper to manage, but you'll never be able to scale them separately, and Redis will be eating RAM that your application could otherwise use.
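For what it's worth, Sidekiq doesn't care where it runs relative to Redis -- you just point both the server and client sides at the Redis URL. A minimal sketch (the default hostname here is made up; substitute your own Redis host/port):

```ruby
# config/initializers/sidekiq.rb
# Point both the Sidekiq server (the worker process) and the client
# (your app enqueuing jobs) at the remote Redis instance.
redis_url = ENV.fetch("REDIS_URL", "redis://redis.internal:6379/0") # hypothetical host

Sidekiq.configure_server do |config|
  config.redis = { url: redis_url }
end

Sidekiq.configure_client do |config|
  config.redis = { url: redis_url }
end
```

With that in place, moving the Sidekiq process between the app server and its own box is just a deployment decision; the code doesn't change.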
For a pure Rack app running on a Heroku hobby dyno with a Heroku Postgres hobby dev add-on, how do you know how many workers & threads to configure Puma to have?
Based on this article it seems like you'd be safe running 2-4 processes within a single Heroku web dyno, depending on your memory usage. For threads, I'd stick to the default (5) unless your app's needs dictate otherwise.
I'd recommend tuning your app to use a particular config, then keeping an eye on the Heroku logs for a few days to see if you get too many R14 (memory quota exceeded) errors. At that point you know you've exhausted the dyno and should scale the config back.
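If it helps, a starting config/puma.rb along those lines might look like the following -- WEB_CONCURRENCY and MAX_THREADS are just conventional env var names you'd set per dyno, not anything Heroku mandates:

```ruby
# config/puma.rb
# 2-4 worker processes depending on memory; 5 threads each by default.
workers Integer(ENV.fetch("WEB_CONCURRENCY", 2))

threads_count = Integer(ENV.fetch("MAX_THREADS", 5))
threads threads_count, threads_count

preload_app!                         # load the app before forking so workers share memory
port ENV.fetch("PORT", 3000)
environment ENV.fetch("RACK_ENV", "production")
```

Then bump WEB_CONCURRENCY up or down as the R14s (or lack of them) dictate.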
It depends greatly on how memory-hungry your application is. Given that it's a pure Rack app and most of the literature out there is for Rails apps, I'd imagine your optimal values are higher.
The Librato add-on is really helpful here in letting you see your memory usage in near real-time, so you can quickly tweak and monitor how close you are to the 512 MB limit. There's a free tier, and it doesn't need any additional instrumentation either (I'm not affiliated with them in any way, but we do use their service!).
How many requests can Heroku's "Vegur" HTTP proxy handle for a simple "hello world" before hitting the limits (if any)?
Would setting up nginx on an EC2 micro instance, serving the same index.html, allow more throughput?
Does Heroku throttle requests per dyno?
Heroku dynos are all small processes running on EC2 machines behind the scenes, so it will almost always be more performant to run identical code directly on an EC2 server than on Heroku, because on Heroku you're sharing a server with other developers.
With that said, Heroku isn't really about having the fastest server -- it's about simplifying your entire development and deployment stack as much as possible to:
Avoid downtime.
Force you to architect code properly.
Make it easier to scale your project as it grows.
etc.
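As for the "hello world" part of the question: the app you'd benchmark on both setups is only a couple of lines of Rack. A sketch of a config.ru you could point a load-testing tool like ab or wrk at (tool choice is up to you):

```ruby
# config.ru -- minimal Rack "hello world" for throughput testing
run lambda { |env|
  [200, { "Content-Type" => "text/plain" }, ["hello world"]]
}
```

Benchmarking that on both setups will tell you more about the real throughput limits than any general answer.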
I currently have the following issue: I run multiple memcached instances across various servers, and if any of those servers goes down, my application suffers (hits to the backend increase). Should I use moxi or Couchbase? I don't want to change my code.
If you use memcached today, install moxi on each application server and point it at a Couchbase cluster with a Couchbase bucket. That is the whole idea behind moxi: it is meant to be the bridge between memcached and Couchbase. Just don't make the mistake some people make and install Couchbase on the same nodes as your application servers, the way some do with memcached.
Just know that you will not get the full feature set Couchbase offers, as some of that requires the SDKs, but you will get the pieces it sounds like you need initially.
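Since moxi speaks the plain memcached protocol on each app server, your application code genuinely doesn't change -- it just talks to what looks like a local memcached. For example, if your app happened to be in Ruby using the Dalli gem (an assumption on my part), it would look like this, assuming moxi is listening on the default memcached port 11211:

```ruby
# Your app still speaks plain memcached -- it just talks to the local moxi,
# which proxies requests on to the Couchbase cluster.
require "dalli"

cache = Dalli::Client.new("127.0.0.1:11211") # moxi's memcached-compatible listener (assumed default port)

cache.set("greeting", "hello")
cache.get("greeting") # => "hello"
```

moxi then handles the hashing, topology changes, and failover against the Couchbase cluster behind the scenes.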
So my web app is hosted on Amazon using OpsWorks.
Currently I have 1 dedicated instance for PostgreSQL, 1 instance as my web server, and another dedicated instance running Redis for caching purposes.
I would like to improve performance by adding Varnish. Given my architecture, where should I install Varnish? Also take into account that I may soon outgrow this solution and be using more web servers behind a load balancer.
Any help would be appreciated!
Bye
Varnish will always be quicker if you run it with memory storage, so the instance with the most free memory would be a good pick. Even if you don't have enough RAM to spare for the cache storage itself, Varnish also uses quite a bit of memory for connection handling once you get a bit more traffic.
Further down the road, when you want a load balancer, a good start would be a dedicated server for Varnish, which can handle load balancing just fine as well. It's not as efficient as a lightweight dedicated load balancer, but until you need multiple Varnish servers (way down the road) there is generally no point in putting anything in front of it.
You should use Varnish in front of your Apache web server. However, it's fine for it to reside on the web server itself; just point your load balancers at Varnish.
I'm hoping some of you with experience using Amazon EC2 could offer some advice. Of course it'll be subjective, which is fine; I'm pretty sure your guesstimate would be better than mine.
I am planning on moving all my clients' websites from shared hosting environments to Amazon EC2. They're all pretty low-traffic sites (the busiest site receives around 50 unique visitors a day). There are about 8 sites, but I may expand this as I take on more projects and host more sites; current capacity planning is for, say, 12 sites.
Each site runs on ASP.Net (Umbraco CMS), and requires a SQL Server database.
My thoughts are one of the following:
Set up a Small instance (1.7 GB RAM, 1 EC2 Compute Unit), and run IIS and SQL Server Express on that server.
Set up 2 Micro instances (613 MB RAM each, up to 2 EC2 Compute Units) – one for IIS, the other for SQL Server.
Which arrangement do you think would work best for my requirements? I've started setting up a Micro instance with Server 2008, SQL Server Express, etc., and I'm finding it's not coping with the memory requirements, hence considering expanding. I could always configure everything on a Small instance, then export the AMI and fire it up in a Micro instance afterwards, and do the same every time any serious changes to the server are required. I guess I could even do all updates etc. on a spare Small Spot instance, then load that AMI up in a Micro and transfer the IP address across, so I don't need to do too much work on the production servers. I figure if I store all my website data files on EBS volumes, then it should be fairly easy to move hosting between servers with minimal downtime, while never working on a production server.
I’m interested to know what you all think, and what strategies you employ for such activities as upgrades, windows updates, software installations, etc.
And what capacity do you think I'd need for my requirements?
Cheers
Greg
Well, first up, Server 2008 doesn't play well in the 613 MB of RAM a Micro instance gives you. It runs, but it's a dog, and it barks louder the more services (IIS, SSE, etc.) you layer on top. We use nothing smaller than a Small for Server 2008, and in fact typically do the environment config on a Medium and scale down to a Small once the heavy lifting is complete and the OS is ready to use. Server 2003, however, seems to breathe easier on a Micro, but we still do the config on a larger instance and scale down.
We're running low-traffic websites on Server 2003/IIS6 in a Micro, with a Server 2008/SQL Server install on a shared, separate Small instance. We do also have one Server 2008/IIS7 Micro build running, but only to remind ourselves why we don't use it more widely. ;)
Larger websites run Server 2008/IIS7 on either Small or Medium instances, but almost always still use that shared, separate SQL Server instance for database services. We try not to deploy multiple SQL Server installations, since that makes maintenance and backups more complex.
Stashing content and config on EBS volumes is of course good practice, unless you like rebuilding the entire system whenever an instance disappears. Snapshotting your instances periodically is also good practice, since you can spin up a new instance from a baseline AMI and swap the snapshot in as a boot volume for fast recovery in the event of disaster.