I have been playing with LittleProxy for a while and I am happy with it. I am now wondering how scalable the solution is, and whether it is mature enough to be used in a production environment.
Do you use it in production?
Regards
Gilles
We handle all outbound traffic through LittleProxy in production. It's probably something like 20,000 requests/min on one instance.
Just looking for opinions on which deployment approach is more suitable for Laravel apps. We currently deploy to EC2 and have recently been looking at modernising our approach.
Discussing it with the dev teams, there seems to be a real divide over which technology to use. While I can see the pros and cons of each approach, I am edging towards a containerized deployment, as it provides a more comfortable dev environment and tech like ECS Fargate can remove a lot of the infrastructure maintenance overhead.
Serverless, while it may be quicker to scale, seems to have certain limitations in terms of response size. Some of our APIs have pretty huge response bodies (a problem for another day). API Gateway also has a timeout limit which I think could cause issues for us when we are under heavy load.
Does anyone recommend one deployment method over the other? What experiences have you had? Anything to keep an eye out for?
I understand that there are limitations with regard to the number of connections etc., but it doesn't have a tick next to "Production Ready".
What does this mean? That I cannot use it for production?
There's really not much documentation on this. It's a play to get you paying, by suggesting that general performance will be subpar until you do, which is nonetheless probably very true. Based on the performance I've seen with Ignite in the past, I certainly wouldn't want to have more than a few people connecting at a given time. Plus, you'll almost certainly run through the allowed 5 MB of storage very quickly if you're doing anything resource-intensive anyway.
I did not see the answer in the documentation, https://golang.org/pkg/net/http/.
It seems pretty complete, but typically I find that built-in web servers, as in Python, PHP, etc., are never recommended for anything but development.
Yes. It is a 'production' server if you use it as such. There is no reason why you would not. It was made with the intent of you using it for real production applications, not just for testing and playing around with the language.
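For anyone landing here with the same question, here is a minimal sketch of what "using it as a production server" tends to look like: the standard net/http server with explicit server-level timeouts set. The port, routes, and timeout values below are just illustrative choices, not recommendations.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello")
	})

	// Explicit timeouts protect the server from slow or stalled clients;
	// the zero-value defaults mean "no timeout".
	srv := &http.Server{
		Addr:         ":8080", // illustrative port
		Handler:      mux,
		ReadTimeout:  5 * time.Second,
		WriteTimeout: 10 * time.Second,
		IdleTimeout:  60 * time.Second,
	}

	log.Fatal(srv.ListenAndServe())
}
```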
I have looked around a bit at websockets, and I have a pretty concrete question:
Can websockets actually be scaled over different servers, or are they always limited to one single server?
This is an issue I've repeatedly bumped into in the docs I have found, but maybe they were incomplete or things have evolved. For example, it seems Heroku doesn't even support websockets at all(?)
It depends on your application, but in general, there is no reason you can't load balance websocket connections to multiple machines in the same way as any other TCP connection.
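To make the "it depends on your application" part concrete: each instance behind the load balancer simply accepts whatever websocket connections get routed to it; the application-level work is any state that spans instances (e.g., pushing a message to a client connected to a different server), which usually goes through a shared broker such as Redis pub/sub. A minimal per-instance handler sketch, assuming the gorilla/websocket package; the echo behaviour, /ws path, and port are just placeholders:

```go
package main

import (
	"log"
	"net/http"

	"github.com/gorilla/websocket"
)

// Each server instance behind the load balancer runs this same handler and
// owns only the connections routed to it.
var upgrader = websocket.Upgrader{
	// Allow any origin for the sake of the example; tighten this in real use.
	CheckOrigin: func(r *http.Request) bool { return true },
}

func wsHandler(w http.ResponseWriter, r *http.Request) {
	conn, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		log.Println("upgrade:", err)
		return
	}
	defer conn.Close()

	for {
		// Echo messages back to the same client. Anything that must reach
		// clients on *other* instances would go through a shared broker
		// (Redis pub/sub, a message queue, etc.) instead.
		msgType, msg, err := conn.ReadMessage()
		if err != nil {
			return
		}
		if err := conn.WriteMessage(msgType, msg); err != nil {
			return
		}
	}
}

func main() {
	http.HandleFunc("/ws", wsHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```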
Are there any tools that would enable me to load-test my server and tell me how much traffic it could roughly handle?
By traffic I mean how many requests per second it can consistently serve without timing out.
I realize that every server is different, and so is every application that runs on that server. That's why I thought this route may be the way to go.
Thanks a bunch!
For a very simple benchmark on web servers (if your request is the same every time), you could use ab. It's a very simple tool, but it gives some interesting statistics nonetheless.
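For example (the URL, request count, and concurrency below are placeholders):

```
ab -n 1000 -c 50 http://localhost:8080/
```

That fires 1,000 requests at a concurrency of 50 and reports requests per second, time per request, failed requests, and the percentage of requests served within given times.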
If you don't mind going the paid software route, then LoadRunner is a very good choice, IMO.
I have used LoadRunner in the past for doing this kind of measurement for multiple web and non web applications.
And I like The Grinder. The only downside (though I don't know whether other tools handle this any better) is that it doesn't replay the hugely long URLs that ASP.NET generates very well.