Kestrel server vs HTTP.sys [closed] - performance

In .NET Core there are two built-in servers: Kestrel and HTTP.sys.
I would like to know the differences between the two and when to prefer one over the other in terms of performance, reliability, microservice friendliness, etc.

See Kestrel vs. HTTP.sys from the official Microsoft docs.
The main difference is that HTTP.sys is Windows-only, while Kestrel can run on Linux as well. That also means HTTP.sys works with Windows Authentication "out of the box" with only a few settings, whereas Kestrel needs a lot more work to set that up. Performance-wise they are similar, with HTTP.sys being a bit faster since it is optimized for Windows. HTTP.sys is also the base that IIS is built on.
Reliability depends not only on the server but also on the infrastructure it runs on. For example, if you put either one in Docker with Kubernetes, they will be reliable and scalable, since the containers take care of that part.
I have microservices running on both, and both are very microservice-friendly; I use them for different purposes and environments depending on the service in question.
I should also mention that for public-facing services I use a reverse proxy anyway, so I am not familiar with how the two behave when exposed directly. That said, Microsoft recommends HTTP.sys for internet-facing services, since it is more resilient to attacks out of the box, but since my services sit behind a reverse proxy that handles those requests, I cannot verify that claim myself.
Hope this helps a bit.

Related

How to run multiple Golang apps on a dedicated server? [closed]

I'm a Go newbie and I'm stuck trying to figure out how to deploy my apps on a dedicated server.
When I worked with PHP I used the standard setup:
But I'm confused as to how I'm supposed to deploy my Go apps.
I know I can run a single app on port :80 but how do I run multiple apps?
Is this the general idea:
Or should I be using something like this:
Can someone clarify the way most Go developers deploy their apps?
Thanks!
I'd highly recommend going with Caddy. You can set up your server with all the apps on different ports (especially higher ports, i.e. 1024 and up, so they don't need to run as root), and then use proxy directives to forward traffic to your apps. As a bonus, you also get free Let's Encrypt certificate support!
See https://caddyserver.com/docs/proxy for more on the proxy directive.
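To make that concrete, here is a minimal sketch of what one of those apps could look like in Go, bound to an unprivileged high port (8081 is just a placeholder) so a front proxy such as Caddy can forward traffic to it:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello from app one")
	})

	// Bind to localhost on a high, unprivileged port; the reverse proxy
	// (Caddy, Nginx, ...) owns :80/:443 and forwards requests here.
	log.Fatal(http.ListenAndServe("127.0.0.1:8081", mux))
}
```

Each additional app gets its own port (8082, 8083, ...), and only the proxy ever needs to touch the privileged ports.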
If you need multiple apps to serve HTTP requests, you should definitely consider using Nginx as a reverse proxy. You can forward all requests on a given route, say /api to one service and /ui to a second service, provided they are bound to different ports.
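If you would rather stay entirely in Go, the same /api-versus-/ui routing idea can be sketched with the standard library's net/http/httputil reverse proxy. This is only an illustration of the concept; the backend ports 8081 and 8082 are made-up examples:

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

// newProxy builds a reverse proxy that forwards every request it receives
// to the given backend base URL.
func newProxy(target string) *httputil.ReverseProxy {
	u, err := url.Parse(target)
	if err != nil {
		log.Fatal(err)
	}
	return httputil.NewSingleHostReverseProxy(u)
}

func main() {
	api := newProxy("http://127.0.0.1:8081") // service handling /api
	ui := newProxy("http://127.0.0.1:8082")  // service handling everything else

	mux := http.NewServeMux()
	// The /api prefix is passed through unchanged; wrap with http.StripPrefix
	// if the backend does not expect it.
	mux.Handle("/api/", api)
	mux.Handle("/", ui)

	// Listening on :80 needs root (or CAP_NET_BIND_SERVICE on Linux);
	// use something like :8080 while testing.
	log.Fatal(http.ListenAndServe(":80", mux))
}
```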
You might also want to look at Traefik (https://traefik.io/), a Go-based reverse proxy.

PaaS/hosted PaaS without restrictions [closed]

I'm looking for a nice PaaS that can run applications which:
Listen on a non-80 external port (port 25; it's an SMTP server)
Write to a persistent filesystem
(These are two different applications, so the PaaS I'm looking for doesn't have to have both features.)
I tried different PaaS and IaaS:
Heroku: no/no
OpenShift: no/yes
AppFog: apparently no/no
AWS: yes/yes - but it's IaaS
I understand that listening on port 25 is not a very popular feature, so I'm open to hosting a PaaS without strict restrictions myself, on, say, AWS. Is there such a thing?
I don't think OpenShift is going to give you exactly what you are looking for; however, as you noted, you will have persistent storage.
As you noted, port 25 is not one of the external ports that your application can bind to with OpenShift. The reason is that in too many situations the use of port 25 leads to accounts not complying with the Acceptable Use Policy.
However, there are alternatives to plain SMTP, such as Mailgun; this service works over port 80 and acts as an SMTP service.
In this way OpenShift can meet both of your requirements (kind of).
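For illustration, the "mail over HTTP instead of port 25" workaround usually boils down to a single authenticated HTTPS POST to the mail provider's API. Here is a rough Go sketch; the endpoint, field names, and key are deliberately made up, so check your provider's documentation (e.g. Mailgun's) for the real ones:

```go
package main

import (
	"log"
	"net/http"
	"net/url"
	"strings"
)

func main() {
	// Hypothetical HTTP mail API endpoint and credentials; substitute the
	// values documented by your provider.
	endpoint := "https://api.example-mail.com/v1/messages"
	apiKey := "YOUR_API_KEY"

	form := url.Values{}
	form.Set("from", "app@example.com")
	form.Set("to", "user@example.com")
	form.Set("subject", "Hello from a PaaS without port 25")
	form.Set("text", "This mail leaves the platform over HTTPS, so port restrictions don't apply.")

	req, err := http.NewRequest("POST", endpoint, strings.NewReader(form.Encode()))
	if err != nil {
		log.Fatal(err)
	}
	req.SetBasicAuth("api", apiKey)
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println("provider responded with", resp.Status)
}
```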
If you are open to hosting the PaaS yourself, you can try out Cloudify. It's open-source, and your application is not limited in what it can do on your instance.
Disclaimer: I work for Gigaspaces, which develops Cloudify.
You may check out http://paasify.it. It's a comparative list of current PaaS vendors that I have compiled.
As for persistent storage, select 'Filesystem' under Services. Possible PaaS include Clever Cloud, HP Cloud Application Platform as a Service, Stackato, and Static.
I'm not aware of which ones allow listening on port 25. I suggest using an add-on service (e.g. Mailgun), as SFERICH suggested.
Cheers, Stefan
I just came across the following article and your question. I hope it can satisfy your need for flexibility:
Dokku on Digital Ocean

Clearing up misconceptions about amazon(EC2) and rackspace [closed]

I'm friends with the owner of a small creative business (with multiple departments), and until now they have been using a dedicated server (via a third party) for a lot of internal projects; they've also been known to iframe a few small dev projects (photo galleries, one-page sites, etc.) off and on for some of their clients (some with high-traffic sites).
They're looking to switch from the dedicated server to a cloud environment. The owner is enamored with Amazon's cloud services but still wanted some alternative options. They also want the new environment to mirror the current one as much as possible (Linux/CentOS, PHP 5.3, MySQL databases), but with the ability to scale when desired.
So the misconceptions I need cleared up and questions I have are:
1) I always assumed Amazon's cloud services were more suitable for high-end, high-traffic, complex web applications (Netflix, Pinterest, Instagram, etc.) than for the typical server use listed above. Is this correct?
2) Is it possible to mirror their current setup on Amazon?
3) If number 1 is not true but they instead chose Rackspace, could they run heavy web apps like Netflix, Pinterest, or Instagram on a Rackspace cloud server if they ever decided to do something that advanced (is Rackspace scalable in the same way EC2 is)?
1) Amazon AWS is also suitable for this environment, or even smaller ones (they offer instances ranging from "Micro", which is far less capable than what you are describing, all the way up to GPU compute clusters).
2) Yes. That is a very common setup for an AWS-based solution. In fact, I recently migrated something similar from Rackspace to AWS.
3) #1 is true. However, you can certainly mix what runs on Rackspace and what runs in the AWS cloud. Keep latency and security issues in mind if the two component solutions need to communicate with each other. Rackspace also has a cloud offering, but it is not as mature as Amazon's.

XMPP server in Amazon EC2 [closed]

Which XMPP server would you recommend for use in Amazon Web Services, running on EC2 instances?
It should scale, with automatic (or at least easy) clustering being very useful; its scaling should also support an XMPP server component. It would be nice if the automatic scaling could work with Amazon Auto Scaling.
Which XMPP server (or even a different cloud offering) would you use? As far as I can tell, Openfire and ejabberd are the most popular choices, but I'm concerned they won't scale well on EC2 instances.
To my knowledge there is no XMPP server with automatic clustering. You should be aware that automatic clustering with XMPP is extremely difficult, because it is a connection-oriented protocol and clustering cannot be made totally transparent, unless you only want to support HTTP (XMPP over BOSH).
You will end up with questions like: What do you do with established TCP/IP connections when you want to remove a node? Do you want to migrate sessions when adding a node?
ejabberd does have good clustering support, however; it runs extremely well on EC2 and is very stable. This is your best bet.
Openfire, to my knowledge, is not an option, as it has no real, generally available clustering support.

Hosting, deploying and running web applications in the cloud [closed]

So far I've read some blog articles about cloud computing and services for hosting applications in the grid.
If I'd wanted to have a web application running in the cloud for as little cost as possible, what would be the best solution?
Let's assume the following configuration:
J2EE web application
Any free database (MySQL, PostgreSQL)
Any web container to deploy the web application to
What application stack would you suggest to be the best combination of services to
host
deploy
run
web applications?
As an additional requirement, the services chosen shouldn't require much server management (firewall settings, etc.).
This space is changing very quickly right now, so I think you will find a lot of different good answers. If I were to do something on the cheap right now, I would probably pick the following stack:
Web server: Apache
App server: Tomcat - use its clustering support if you need to grow, or split at the Apache level, or even introduce a load-balancer box at the very front
DB server: MySQL - mainly because it is easy to cluster
Platform: Scalr - the cloud setup is simple and cheap. It uses Amazon's cloud on the backend, and that gets you a lot of extras, like putting servers in different data centers for redundancy.
Now you can add or remove parts of this. You may not need a web tier at all and can just expose Tomcat directly. You may need EJBs, in which case you can just fire up more nodes for that and create another tier. You may want to add a load-balancing tier in front of Apache. You may want to use the Amazon CloudFront service to push static files to their edge network.
I have investigated Amazon's EC2 solution recently. It is quite good, and there are many pre-built boxes that you can use if you find one that suits your needs. I think there will still be some server management involved... you cannot get away from that. But the pre-built boxes will make it easier.
The cost is reasonable as you only pay for what you use.
[EDIT] The pre-built boxes are called Amazon Machine Images (AMIs).
I don't think you can get much closer than Jelastic. It has all the things that #carson mentioned. I will especially point out their unique web console; there is no dependency on any API or console that has to be installed. I use their platform for many of my startup's clients. Additionally, you get Nginx support for load balancing, and you can configure it right from the console.
