How many requests can Heroku's "Vegur" HTTP proxy handle for a simple "hello world" before hitting its limits (if any)?
Will setting up nginx on an EC2 micro instance, serving the same index.html, allow more throughput?
Does Heroku throttle the requests per dyno?
Heroku dynos are all small processes running on EC2 machines behind the scenes. It will therefore almost always be more performant to run identical code directly on an EC2 server, because on Heroku you're sharing a server with other developers.
With that said, Heroku isn't really about having the fastest server -- it's about simplifying your entire development and deployment stack as much as possible to:
Avoid downtime.
Force you to architect code properly.
Make it easier to scale your project as it grows.
etc.
I have a Ruby app deployed on one server and Redis on another server. What are the pros and cons of deploying Sidekiq on the same server as the Ruby app?
Probably better for Serverfault...
But the biggest pro of keeping them separate is that you can add application servers and point them all at the same Redis server, which lets you scale your application horizontally much more easily.
When you've got both on a single server it might be a bit easier/cheaper to manage, but you'll never be able to scale them separately, and Redis will be eating RAM that your application can't use.
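For illustration, pointing every app server and Sidekiq process at the shared Redis box is a one-line setting; a minimal sketch, assuming a Rails-style initializer and a placeholder Redis URL:

    # config/initializers/sidekiq.rb -- sketch; the URL is a placeholder
    redis_url = ENV.fetch("REDIS_URL", "redis://10.0.0.5:6379/0")

    Sidekiq.configure_server do |config|
      config.redis = { url: redis_url }   # workers read jobs from the shared Redis
    end

    Sidekiq.configure_client do |config|
      config.redis = { url: redis_url }   # app servers enqueue to the same Redis
    end

Adding another app server is then just a matter of deploying the same config with the same REDIS_URL.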
So my web app is hosted on Amazon using OpsWorks.
Currently I have one dedicated instance for PostgreSQL, one instance as my web server, and another dedicated instance running Redis for caching purposes.
I would like to improve performance by adding Varnish. Given my architecture, where should I install Varnish? Please also take into account that I may soon outgrow this solution and be running more web servers behind a load balancer.
Any help would be appreciated!
Bye
Varnish will always be quicker if you run it with memory storage, so the instance with the most free memory would be a good pick. Even if you don't have much to spare for the storage itself, Varnish also uses a fair amount of memory for connection handling once traffic picks up.
Further down the road, when you want a load balancer, a good start would be a dedicated Varnish server, which can also handle load balancing just fine. It's not as efficient as a lightweight dedicated load balancer, but until you need multiple Varnish servers (way down the road) there is generally no point in putting anything in front of it.
You should put Varnish in front of the Apache web server. It's fine for it to reside on the web server itself; just point the load balancers at Varnish.
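As a concrete sketch of what that looks like (the ports, paths, and storage size below are my assumptions): move Apache off port 80, point a minimal VCL backend at it, and start varnishd with malloc (memory) storage:

    # /etc/varnish/default.vcl -- sketch; host and port are assumptions
    vcl 4.0;

    backend default {
        .host = "127.0.0.1";   # Apache, moved to port 8080
        .port = "8080";
    }

    # start Varnish on port 80 with 1 GB of memory storage:
    #   varnishd -a :80 -f /etc/varnish/default.vcl -s malloc,1G

The -s malloc size is the knob to tune to the free memory on whichever instance you pick.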
I need to be able to enable access through a firewall to a server for an app that is built atop Heroku. Unfortunately the IPs coming from Heroku's AWS instances seem to vary quite a bit. Is there a "correct" way of determining what subnet to expect from Heroku's AWS platform for an app?
As unfortunate as it is, there isn't a good way to continuously get this information. On the AWS forums, however, the EC2 engineers occasionally post their IP ranges (here is a recent example: https://forums.aws.amazon.com/ann.jspa?annID=1701).
The downside, however, is that this requires a lot of manual work.
There is no reliable way to accept Heroku public IPs in firewalls. Even if there were, you would be compromising your application and opening up an attack vector via other apps on Heroku.
The solution is to have an adequate authentication layer in your exposed services.
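For example, a shared secret checked at the application layer travels with every request and doesn't depend on the caller's IP. A minimal sketch as Rack middleware (the header name and environment variable are my own placeholders):

    require "rack/utils"

    # Rejects any request that doesn't carry the expected token header.
    class RequireApiToken
      def initialize(app, token:)
        @app = app
        @token = token
      end

      def call(env)
        supplied = env["HTTP_X_API_TOKEN"].to_s
        if Rack::Utils.secure_compare(supplied, @token)
          @app.call(env)   # token matches: pass the request through
        else
          [401, { "Content-Type" => "text/plain" }, ["unauthorized"]]
        end
      end
    end

    # config.ru:
    #   use RequireApiToken, token: ENV.fetch("SHARED_TOKEN")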
This question was asked a few years ago, back when services like Proximo didn’t exist -- or weren’t known within the Heroku community.
Today, if you want your outbound traffic to come through a static IP which you can whitelist in your firewall, you can use a proxy service like Proximo (Fixie is another example).
There are a few downsides to using these services:
1) Intrusive Setup
Although the setup of these addons is relatively simple, it’s important to understand how they affect the application.
In the case of Proximo, for example, you’ll be required to wrap your processes in a special utility.
This utility will “automatically forward outbound TCP connections made by the wrapped process over your proxy.”
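Concretely, the Procfile entry changes from starting the server directly to starting it under the wrapper; something along these lines (the exact command comes from the addon's docs and may differ for your stack):

    Before:  web: bundle exec unicorn -c config/unicorn.rb
    After:   web: bin/proximo bundle exec unicorn -c config/unicorn.rb

After the change, outbound TCP connections made by the web process go out through the proxy's static IP.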
2) Latency
To make your outbound traffic come from a static IP, these services route the traffic through a proxy. This means you’ll add another hop to your outbound communication.
I know that applications that run on Heroku usually aren’t very sensitive to network latency, but it’s important to take this issue into consideration.
3) Uptime
Although these services are relatively stable, it should be noted that routing the traffic through a specialized third-party proxy adds another point of failure and may affect the overall stability of your applications.
To summarize, these services will help you solve the problem. However, I would consider using them as a temporary workaround, not a complete solution.
Rest assured that these kinds of fixes can hold for a very long time, but if security becomes increasingly important for the applications you’re running on Heroku, it can be a good idea to start planning a migration to AWS.
If you’re wondering when the best time for your team to make the transition to AWS might be, I’ve shared a few notes here: “Will Heroku always be perfect?”
Hope that helps.
I have a simple Django app set up that I have been testing with Blitz.io.
When I test with many dynos I can get thousands of req/s on http://X.com.
When I switch to https://X.com I get no more than 72 req/s, no matter how many dynos.
And on https://X.herokuapp.com/ I get more, but it still tops out at a few hundred req/s.
Is this a fluke that won't show up with normal use cases? A Blitz issue? A Heroku issue? Would resources just be scaled up with demand?
We had a very similar issue with blitz.io and loader.io. See our blog post at http://making.fiftythree.com/load-testing-an-unexpected-journey/ for more details. It's very possible that blitz.io is the cause of your SSL issue. We found that BlazeMeter could handle the load quite well.
If cost is a concern, you might also want to try open-source tools like Siege or JMeter.
This answer assumes https://X.com uses the ssl:endpoint Heroku addon to serve a custom cert.
The ssl:endpoint addon is implemented using an AWS Elastic Load Balancer. ELBs distribute load amongst their nodes using DNS. In my experience, each individual ELB node isn't particularly beefy, and SSL negotiation/decryption is non-trivial from a CPU perspective. So it's important when load testing to:
Re-resolve the hostname with each request to distribute load amongst all the ELB IPs, especially as new ones are added in response to increased traffic (see the sketch after this list).
Ramp up your load test slowly. Amazon advises increasing the load on an ELB by at most 50% every 5 minutes.
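A minimal sketch of the first point (the hostname is a placeholder), doing a fresh DNS lookup per request instead of reusing a single resolved IP:

    # Re-resolve DNS on every request so load spreads across all current ELB nodes.
    require "resolv"
    require "net/http"
    require "openssl"

    HOST = "www.example.com"   # your custom-domain hostname (placeholder)

    10.times do
      ip = Resolv.getaddresses(HOST).sample          # fresh lookup each iteration
      http = Net::HTTP.new(ip, 443)
      http.use_ssl = true
      http.verify_mode = OpenSSL::SSL::VERIFY_NONE   # cert is for HOST, not the IP; testing only
      request = Net::HTTP::Get.new("/")
      request["Host"] = HOST                         # preserve the real Host header
      puts "#{ip} -> #{http.request(request).code}"
    end

A real load tool should do the equivalent; the point is that pinning to one resolved IP tests a single ELB node rather than the balanced pool.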
I wouldn't be particularly surprised if a single ELB node allows substantially fewer concurrent HTTPS connections than HTTP connections, which, if you're pinned to one IP, may account for the difference you're observing.
I don't know the details of the https://*.herokuapp.com stack, but I'm not surprised at all that it can service quite a bit more https traffic than a cold ssl:endpoint ELB.
I have a small site on Heroku and am currently using Thin.
I've been vaguely aware of Unicorn but never felt like I had something that fit its "fast client" stipulation.
The README and this link suggest that we're talking about only using Unicorn on a LAN (or maybe National LambdaRail), but it seems like lots of people are using it for typical sites accessed over normal broadband and maybe even mobile networks. Is this true? What gives?
Unicorn is typically used behind a web server/proxy like nginx, which receives the HTTP connection from the actual client, serves static assets, and forwards dynamic requests to the backend server (Unicorn).
The web server now acts as the client to Unicorn. nginx (and, in most cases, Apache's mod_proxy) acts as a store-and-forward proxy: it first buffers the full response (or at least as much as fits into its buffer) before sending it on to the client, which frees the Unicorn worker almost immediately. This nicely fits Unicorn's definition of a fast client. The difficult task of buffering and serving data to slow clients is handed off to the web server, which has to do it anyway and thus can probably do it much better.
This also suggests that you should probably not run Unicorn directly facing clients, unless those clients consume data quickly (e.g. on a LAN with non-congested clients and network).
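The usual shape of this pairing, as a sketch (the socket path and worker count are assumptions): Unicorn listens on a local Unix socket and nginx proxies to it, buffering in both directions:

    # config/unicorn.rb -- minimal sketch
    worker_processes 4                         # one per core is a common start
    listen "/tmp/unicorn.sock", backlog: 64    # nginx proxy_passes to this socket
    timeout 30                                 # kill workers stuck past 30s

nginx's proxy buffering is what turns arbitrary slow clients into the single fast client Unicorn expects.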
We're using Unicorn on Heroku and having good results with it. What the Unicorn site doesn't spell out is that there's a difference between Unicorn serving dynamic data and serving static assets. If you offload asset serving to a CDN, there's not much difference between Unicorn with or without nginx in front. One caveat to this: raw Unicorn is vulnerable to an intentionally slow client, such as might be introduced in a DDoS or other hack attempt.