I am working on my first mobile app, which should run natively on Android and iOS and also as a web app inside the mobile browser. I plan to use Node.js, with nginx serving my files and git used to push new code to the nodes. My only problem is that I only have access to a single root server with the following specs:
Intel Core i7-2600 (quad core)
32 GB RAM
2 × 3 TB hard drives in software RAID
Unlimited Traffic
My plan is to build my own Node.js modules for the app's infrastructure and render everything on the client side with Express and dust.js or modify.js.
My question is: how do I run multiple nginx and Node.js servers as a scale-out setup, with as little overhead as possible, to squeeze the most out of this single machine?
For nginx, I think you just need to make sure the accept_mutex setting is enabled and configure 4 worker processes (one per core).
https://stackoverflow.com/a/3436969/266795
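As a rough sketch, the relevant nginx.conf pieces would look something like this (directive names are standard nginx; the worker count assumes the four physical cores of that i7-2600):

    worker_processes 4;            # one worker per physical core

    events {
        worker_connections 1024;   # per-worker connection cap; tune to taste
        accept_mutex on;           # workers take turns accepting new connections
    }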
For Node.js, use the built-in cluster module to run one process per core.
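A minimal sketch of that setup (the port and response body are placeholders):

    // app.js - fork one worker per CPU; workers share the listening socket
    var cluster = require('cluster');
    var http = require('http');
    var numCPUs = require('os').cpus().length;

    if (cluster.isMaster) {
      for (var i = 0; i < numCPUs; i++) {
        cluster.fork();
      }
      // replace any worker that dies
      cluster.on('exit', function () {
        cluster.fork();
      });
    } else {
      http.createServer(function (req, res) {
        res.end('handled by worker ' + process.pid + '\n');
      }).listen(3000);
    }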
In any case, that server is plenty capable of handling massive traffic for most sites/apps. Given this is your first mobile app, your odds of having traffic and workload to outgrow that much server power are near-zero. Don't sweat it.
I am learning about containers (mostly Docker) since they are coming to Windows, and the benefits seem very similar to what IIS already gives me.
I work behind a firewall building apps for my company (Line of Business). We have a bunch of VMs that will each host a family of web services. One VM can have 20+ services running on IIS.
In that scenario what does deploying my services via Docker get me that I don't already get using IIS?
NOTE: I am completely new to Docker and only have developer level experience in IIS.
Docker is not a replacement for IIS - it can run an application like IIS within a container (I assume - not sure how this is going to work on Windows).
Docker is more like a replacement for a VM - the biggest difference between a VM and a Docker container is that the Docker container is MUCH lighter than a full VM. The usual claim is that you can run many more Docker containers on a host than VMs (but your mileage may vary - some of the claims out there are a bit... overstated).
Basically, the idea is this: a VM is a full virtual machine - a real OS running on top of virtual hardware (that looks real enough to the OS). Thus, you're going to get all the bells & whistles of an OS, including stuff you probably don't need if you're running IIS or another HTTP server.
Docker, on the other hand, just uses the host's OS but employs some useful features of the OS to isolate the processes running in the container from the rest of the host. So you get the isolation of a VM (useful in case something fails or for security) without the overhead of a whole OS.
Now you could run "20+ services" in a single Docker container, but that's not generally recommended. Because Docker containers are so lightweight, you can (and should!) limit them to one service per container. This gives you benefits such as:
separation of concerns: your database container is just that - a database. Nothing else. And furthermore, it only handles the data for the application that's using it.
improved security: if you want to set it up this way, your database container can only be accessed from the application that's using that database.
limited stuff installed: your database container should be running MySQL only - no SSH daemon, no web server, no other stuff. Simple & clean, with each container doing exactly one thing.
portability: I can configure my images, pull them to a new host, and start up the container and I will be guaranteed to have the exact same environment on the new host that I have on the old one. This is extremely useful for development.
That's not to say you couldn't set something similar up using VMs - you certainly could - but imagine the overhead of a full VM for each component in your application.
As an example, my major project these days is a web application running on Apache with a MySQL database, a redis server, and three microservices (each a simple independent web application running on Lighttpd). Imagine running six different servers for this application.
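To make the one-service-per-container idea concrete, here is a rough sketch of starting such a stack (the mysql and redis images are the official ones; my-apache-app and the link aliases are hypothetical names):

    # one container per service
    docker run -d --name mysql -e MYSQL_ROOT_PASSWORD=secret mysql
    docker run -d --name redis redis
    docker run -d --name webapp -p 80:80 --link mysql:db --link redis:cache my-apache-app
    # ...plus one docker run per microservice

Each piece can then be upgraded, restarted, or moved independently of the others.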
Docker containers add support for .NET, SQL Server, and other workloads that would integrate with IIS. You also benefit from Docker's portability, as you could take your container images and run them on AWS or Azure, as well as privately. And you get access to a large ecosystem of Docker-based tools... bottom line, the industry is moving to support the Docker API.
To host a web application on IIS in a container, a good starting point is the latest IIS Docker image. If ASP.NET or WCF is the target platform, use the relevant images for those two platforms instead, which in turn include IIS:
https://hub.docker.com/r/microsoft/iis/
https://hub.docker.com/r/microsoft/aspnet/
https://hub.docker.com/r/microsoft/wcf/
I'm looking to set up Plastic SCM on a hosted server and am considering an Amazon EC2 instance for this. Any recommendations on the following would be appreciated:
Minimum server specs for good performance
Tips on setup/config
Windows v. Linux
MySQL v. SQL Server v. SQL Express
Thanks!
We have extensively tested Plastic on EC2, in fact it is one of the main environments where we run Plastic SCM tests.
It all depends on the load that the server needs to handle.
Tiny server for occasional pushing and pulling
For instance, the demo server we use to handle the evaluation guide runs on a tiny EC2 instance, with Linux and MySQL and a total of 512 MB of RAM. It is good for occasional pushing and pulling but of course not to be used under heavy load.
Big server for extreme load
On the other hand, we use a more powerful server to run 'load tests' with 300 concurrent bot clients doing about 2000 checkins per minute on a big repository. We detail the specs here. Basically, for higher performance:
20 GB RAM
2 × Intel Xeon X5570
4 cores per processor (2 threads per core) at 2.7 GHz – 16 logical cores – Amazon server running Windows Server 2012 + SQL Server 2012
Central vs distributed development
That being said, remember that if you set up a cloud server, your bigger restriction under heavy load won't be the server itself but the network. If you plan to work in a centralized way (your workspaces directly connected to the cloud server), then the network will definitely be a consideration. Every checkin, every branch creation, every switch to a new branch will mean connecting to the remote server, and chances are you won't get the same network speed you get on a LAN.
The other option is that you work distributed: you have your own Plastic repositories on the developer machines and you just push/pull to the central server. If that's the case it will work great and the requirements won't be high at all.
Specs for a 15-user team working distributed + Amazon EC2 server
If that's your case I'd go for:
Linux server + MySQL (cheaper than Windows and works great)
Make sure you install the server with the packages we provide. We include our own build of Mono that works wonders. Remember to set up the Mono server to run with sgen (the latest Mono garbage collector).
Install MySQL (or MariaDB). Follow the instructions we provide here. Remember we do need to configure max_allowed_packet in MySQL so it allows 10 MB packets (we use 4 MB but set it to 10; see the sketch after this list). Everything is explained in the guide.
Use "user/password" security mode. Remember to configure the permissions so only your team can access :-)
For 15 users an m1.small instance will be more than enough (1.75 GB of RAM and a little bit of CPU).
Configure SSL and remove regular TCP so that your server is always secured. Check this.
We added an option in 5.4 that can store all data encrypted, so even if the central repo on Amazon is compromised (unlikely), nobody will be able to access your data.
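For reference, the max_allowed_packet step above boils down to a single my.cnf setting; a sketch (the file location varies by distribution):

    # /etc/mysql/my.cnf
    [mysqld]
    max_allowed_packet = 10M    # Plastic sends packets up to ~4 MB; 10M leaves headroom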
Clients (I'll assume you're using Windows):
Install both client and server (remember we install a server to handle the local replicas of the repos).
Configure it in UP (user/password) mode.
Push and pull from the remote.
Alternatively you can also configure the SQLite backend (the one I've been using for 4 years now on Windows), which is extremely fast. By default on Windows, SQL Server Compact Edition (embedded) will be installed; it is OK too.
Connect to the server using SSL.
Hope it helps :-)
Does anyone have any experience running clustered Tigase XMPP servers on Amazon's EC2? Primarily I wish to know about anything that might trip me up that is non-obvious. (For example, apparently running Ejabberd on EC2 can cause issues due to Mnesia.)
Or if you have any general advice to installing and running Tigase on Ubuntu.
Extra information:
The system I’m developing uses XMPP just to communicate (in near real-time) between a mobile app and the server(s).
The number of users will initially be small, but hopefully will grow. This is why the system needs to be scalable. Presumably for a just a few thousand users you wouldn’t need a cc1.4xlarge EC2 instance? (Otherwise this is going to be very expensive to run!)
I plan on using a MySQL database hosted in Amazon RDS for the XMPP server database.
I also plan on creating an external XMPP component written in Python, using SleekXMPP. It will be this external component that does all the ‘work’ of the server, as the application I’m making is quite different from instant messaging. For this part I have not worked out how to connect an external XMPP component written in Python to a Tigase server. The documentation seems to suggest that components are written specifically for Tigase - and not for a general XMPP server, using XEP-0114: Jabber Component Protocol, as I expected.
With this extra information, if you can think of anything else I should know about I’d be glad to know.
Thank you :)
I have lots of experience, and I think there are plenty of non-obvious problems. For example, the only instance type that reliably runs an application like Tigase is cc1.4xlarge. Others cause problems with CPU availability, and it is just a lottery whether you are lucky enough to run your service on a host that is not busy with other people's work.
You also need an instance with the highest possible I/O to make sure it can cope with the network traffic. The high I/O requirement applies especially to the database instance.
Not sure if this is obvious or not, but there is a problem with hostnames on EC2: every time you start an instance, the hostname and IP address change. The Tigase cluster is quite sensitive to hostnames. There is a way to force/change the hostname for the instance, so this might be a way around the problem.
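As a sketch of one way to pin the hostname on a Linux instance (generic Linux commands, nothing Tigase-specific; the node name and IP are placeholders):

    # set the hostname for the running session
    sudo hostname tigase-node1
    # make it survive reboots (Debian/Ubuntu layout)
    echo "tigase-node1" | sudo tee /etc/hostname
    # map the name to the instance's private IP so cluster nodes can resolve it
    echo "10.0.0.11  tigase-node1" | sudo tee -a /etc/hosts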
Of course, I am talking about a cluster for millions of online users and really high traffic (100k XMPP packets per second or more). Generally, for a large installation it is way cheaper and more efficient to have dedicated servers.
Generally Tigase runs very well on Amazon EC2, but you really need the latest SVN code as it has lots of optimizations added, especially after tests on the cloud. If you provide some more details about your service I may have some more suggestions.
More comments:
When it comes to costs, a dedicated server is always the cheaper option for a constantly running service. Unless you plan to switch servers on/off on an hourly basis, I would recommend going for a dedicated service. Costs are lower and performance is way more predictable.
However, if you really want/need to stick to Amazon EC2, let me give you some concrete numbers. Below is a list of instances and how many online users the cluster was able to reliably handle:
5*cc1.4xlarge - 1.7 million online users
1*c1.xlarge - 118k online users
2*c1.xlarge - 127k online users
2*m2.4xlarge (with 5GB RAM for Tigase) - 236k online users
2*m2.4xlarge (with 20GB RAM for Tigase) - 315k online users
5*m2.4xlarge (with 60GB RAM for Tigase) - 400k online users
5*m2.4xlarge (with 60GB RAM for Tigase) - 312k online users
5*m2.4xlarge (with 60GB RAM for Tigase) - 327k online users
5*m2.4xlarge (with 60GB RAM for Tigase) - 280k online users
A few more comments:
Why does the amount of memory matter so much? Because CPU power is very unreliable and inconsistent on all but cc1.4xlarge instances. You have 8 virtual CPUs, but if you look at top you often see one CPU working and the rest idle. This insufficient CPU power causes internal queues to grow inside Tigase; when the CPU power comes back, Tigase can process the waiting packets. The more memory Tigase has, the more packets can be queued and the better it handles CPU shortfalls.
Why is 5*m2.4xlarge listed 4 times? Because I repeated the tests many times on different days and at different times of day. As you can see, depending on the time and date the system could handle a different load. I guess this is because the Tigase instances shared CPU power with other tenants; when those were busy, Tigase was starved of CPU.
That said, I think an installation of up to 10k online users should be fine. However, other factors like roster size matter greatly, as they affect traffic and load. Also, any other elements that generate significant traffic will put load on your system.
In any case, without some tests it is impossible to tell how your system really behaves or whether it can handle the load.
And the last question regarding component:
Of course Tigase does support XEP-0114 and XEP-0225 for connecting external components, so components written in different languages should not be a problem. On the other hand, I recommend using Tigase's API for writing components. They can be deployed either as internal Tigase components or as external components, and this is transparent to the developer; you do not have to worry about it at development time. This is part of the API and framework.
Also, you get all the goodies of the Tigase framework: scripting capabilities, monitoring, statistics, and much easier development, since you can easily deploy your code as an internal component for tests.
You really do not have to worry about any XMPP-specific stuff; you just fill in the body of the processPacket(...) method and that's it.
There should be enough online documentation for all of this on the Tigase website.
Also, I would suggest reading about Python's support for multi-threading (the global interpreter lock) and how it behaves under very high load. It has historically not been so great.
I'm hoping some of you with experience using Amazon EC2 could offer some advice. Of course it'll be subjective, which is fine; I'm pretty sure your guesstimate would be better than mine.
I am planning on moving all my clients' websites from shared hosting environments to Amazon EC2. They're all pretty low-traffic sites (the busiest receives around 50 unique visitors a day). There are about 8 sites, but I may expand this as I take on more projects and host more sites; current capacity planning is for, say, 12 sites.
Each site runs on ASP.Net (Umbraco CMS), and requires a SQL Server database.
My thoughts are one of the following:
Set up a Small instance (1.7 GB RAM, 1 EC2 Compute Unit), and run IIS and SQL Server Express on that one server.
Set up 2 Micro instances (613 MB RAM each, up to 2 EC2 Compute Units) – one for IIS, the other for SQL Server.
Which arrangement do you think would work best for my requirements? I've started setting up a Micro instance with Server 2008, SQL Server Express, etc., and I'm finding it isn't coping with the memory requirements, hence considering expanding. I could always configure on a Small instance, then export the AMI and fire it up in a Micro instance afterwards, and do the same every time serious changes to the server are required. I guess I could even do all updates etc. on a spare Small Spot instance, then load that AMI up in a Micro and transfer the IP address across, so I don't need to do too much work on the production servers. I figure if I store all my website data files on EBS volumes, then it should be fairly easy to move hosting between servers with minimal downtime, while never working on a production server.
I’m interested to know what you all think, and what strategies you employ for such activities as upgrades, windows updates, software installations, etc.
And what capacity do you think I’d need for my requirements.
Cheers
Greg
Well, first up: Server 2008 doesn't play well in the 613 MB of RAM the Micro instance gives you. It runs, but it's a dog, and it barks louder the more services (IIS, SSE, etc.) you layer on top. We use nothing smaller than a Small for Server 2008, and in fact typically do the environment config in a Medium and scale down to Small once the heavy lifting is complete and the OS is ready to use. Server 2003, however, seems to breathe easier on a Micro - but we still do the config on a larger instance and scale down.
We're running low-traffic websites on Server 2003/IIS6 in a Micro, with a Server 2008/SS install on a shared, separate, Small instance. We do also have one Server 2008/IIS7 Micro build running, but only to remind ourselves why we don't use it more widely. ;)
Larger websites run Server 2008/IIS7 in either Small or Medium instances, but almost always still using that shared separate SS instance for database services. We try not to deploy multiple SS installations, since it makes maintenance and backups more complex.
Stashing content and config on EBS Volumes is of course good practice, unless you like rebuilding the entire system whenever an Instance disappears. Snapshotting your Instances periodically is also good practice, since you can spin-up a new Instance from a baseline AMI and swap the snapshot in as a boot Volume for fast recovery in the event of disaster.
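As an illustration of that snapshot-and-recover routine using today's AWS CLI (all IDs are placeholders; the EC2 API tools of the day had equivalent commands):

    # snapshot the EBS volume that holds content and config
    aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "weekly baseline"
    # after a failure: launch a fresh instance from the baseline AMI...
    aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type m1.small
    # ...then restore the data volume from the snapshot and attach it
    aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --availability-zone us-east-1a
    aws ec2 attach-volume --volume-id vol-0fedcba9876543210 --instance-id i-0123456789abcdef0 --device /dev/sdf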
Hi
We have built two websites on MOSS 2007 which have many customizations on their pages and many AJAX web parts.
Currently one of the sites is live, and the configuration is as follows:
Web server:
Xeon 4-core processor
12 GB RAM
50 GB hard disk
SQL Server:
Xeon 4-core processor
16 GB RAM
150 GB disk space
Servers are deployed as virtual machines on VMWare.
The live site is in a test phase; it is open to public users and receives about 1,000 unique users per day.
The problem is that the site is too slow, and we are planning to put 11 websites on this same configuration, one of which will be a very popular site that we expect to receive 10,000-15,000 unique visitors per day.
What might the problem be? Is this configuration too weak for the current site, and how can we plan a configuration for the future projects?
If you are expecting more than 15k unique users, please follow the large farm topology for SharePoint. You can find more details here: Physical topology.
Of course, following best practices and the physical topology is a sound procedure!
However, just saying your sites are slow isn't saying much at all. Did you look into all the possible reasons for the slowness? Try hitting F12 in IE8 or IE9 and look at the network time, for instance. How is your code performing? Could there be any other cause for the slowness? How did you configure your virtual network settings?
Looking at your hardware right now, it certainly looks reasonable for the job. How did you configure the backend SQL database: is it running on fibre or iSCSI (and if so, on what kind)? Did you perform I/O testing on both machines over a period of time? What were the outcomes?
Just saying it's slow doesn't say anything without more detail. Start with F12 in IE8 and go on from there; try to look for the things that waste time, be it code or hardware I/O holding the performance down.
Did you know Microsoft advises against hyper-threading on server machines in combination with SharePoint?