I am unsure about which monitoring framework to use. Currently I am looking at either Nagios or Sensu.
Can anybody give me a good reference which shows a comparison of these two (or any other monitoring tool that may be a good solution)? My main intention is to scale out on EC2. I am using Opscode Chef for system integration.
One important difference between Nagios and Sensu:
Nagios requires all of the configuration for (1) checks, (2) handlers, and, most importantly, (3) hosts to be written in configuration files on the Nagios server. This means that each time any of these changes (for example, new hosts are added or old hosts removed), you need to rewrite the configuration files and restart Nagios.
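For illustration, here is a rough sketch of what that static configuration looks like (the host name, address and template names are hypothetical); every new host needs an entry like this on the server, followed by a Nagios reload:

```
# /etc/nagios/conf.d/web01.cfg -- hypothetical, hand-written or generated by Chef
define host {
    use        linux-server      ; host template, assumed to exist
    host_name  web01
    address    10.0.0.5
}

define service {
    use                  generic-service   ; service template, assumed to exist
    host_name            web01
    service_description  HTTP
    check_command        check_http
}
```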
Sensu is almost the same as the above, with one important difference: when hosts are added to or removed from your architecture (as is the case in most auto-scaling cloud deployments), the hosts themselves run a sensu-client that "subscribes" to the available checks. So when a new server comes into existence and says "I'm a webserver", the sensu-client running on it asks the sensu-server "what checks should a webserver run on itself?" and runs those.
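A rough sketch of what this looks like with Sensu 0.x JSON configuration (the host name, file paths and check command are hypothetical). On the client, /etc/sensu/conf.d/client.json declares its subscriptions:

```json
{
  "client": {
    "name": "web-01",
    "address": "10.0.0.5",
    "subscriptions": ["webserver"]
  }
}
```

On the Sensu server, a check is published once to that subscription (say, /etc/sensu/conf.d/check_http.json), and every current or future client subscribed to "webserver" picks it up without any server-side host list being edited:

```json
{
  "checks": {
    "check_http": {
      "command": "check-http.rb -u http://localhost",
      "subscribers": ["webserver"],
      "interval": 60
    }
  }
}
```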
Other than this, operations-wise both Nagios (and Icinga) and Sensu are great and have a lot of facilities for checks, handlers, and visibility through a dashboard (YMMV).
From a little recent experience with Sensu and quite a bit of experience with Nagios I'd say both are excellent choices.
Sensu is definitely the new kid. It has a nice UI and a nice API. It does, however, require Redis and RabbitMQ in your setup to work, so consider whether you'll want something to monitor those dependencies outside the Sensu monitoring stack. Sonian provide Chef recipes for trying it out too.
https://github.com/sensu/sensu-chef
Nagios has been around for an awfully long time. It's generally packaged for most distros, which makes installation simple, and it has few dependencies. Its track record also means that finding people who know it, or who have used it and can offer advice, is easy. On the other hand, the UI is ugly and programmatic access is often hacky or via third-party add-ons. Chef recipes also exist for Nagios:
https://github.com/bryanwb/chef-nagios
If you have time I'd try both; there is little harm in having two monitoring systems running as a trial. The main thing to focus on, especially in a dynamic EC2 setup, is how easily the monitoring configuration can be generated by your configuration management tool.
In terms of other tools I'd personally include something to record time series data, for instance requests per second or load over time. Graphs are a great help with monitoring, and can be used to drive alerting via Nagios or similar. Personally I'm a fan of both Ganglia and Graphite while Librato Metrics (https://metrics.librato.com/) is a very nice non-free option.
I tried using Nagios for a while: I got the feeling that the only reason it's common is that 'everyone else uses it', because it's absolutely hideous to work with. Massively overcomplicated, difficult and long-winded to make it do anything new: if you find something it doesn't do, you know you're in for a week of swearing at crummy documentation of an archaic design. And at the end of all your efforts, when it's all working, it still looks hideous. Scrapping it made me sleep better.
Cacti looks nice, but again it's unnecessarily complex when creating new plugins.
For graphing I'd recommend Munin: it's completely trivial to write new plugins in any language, there are hundreds available, and it looks reasonable. It's incredibly easy to install (one command to install, one access rule to set), so it works well for automated deployments and is easy to wrap into a Chef recipe. 2.0 is out soon and addresses most of its shortcomings (in particular adding variable update intervals, zoomable graphs, and an SSH transport). Munin can talk to Nagios for notifications, or it can do that itself, and it provides a basic dashboard.
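To give an idea of how trivial a plugin is, here is a minimal hypothetical sketch in Python (the graph title and field name are my own choices); Munin calls the executable with "config" to learn about the graph and with no arguments to fetch values:

```python
#!/usr/bin/env python3
# Minimal hypothetical Munin plugin: graphs the 1-minute load average.
import os
import sys

if len(sys.argv) > 1 and sys.argv[1] == "config":
    # Called once by munin-node to describe the graph and its data series.
    print("graph_title Load average (1 min)")
    print("graph_vlabel load")
    print("load.label load")
else:
    # Called at every polling interval to report the current value.
    print("load.value %.2f" % os.getloadavg()[0])
```

Roughly speaking, you drop something like this into /etc/munin/plugins/, make it executable, and munin-node picks it up on restart.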
For local process/file/service monitoring, monit is simpler and works better than god. I've not tried it with m/monit.
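For reference, a typical monit stanza is about this simple (the file path, PID file and service commands below are hypothetical):

```
# Hypothetical /etc/monit/conf.d/nginx: restart nginx if the process dies
# or stops answering HTTP on port 80.
check process nginx with pidfile /var/run/nginx.pid
  start program = "/usr/sbin/service nginx start"
  stop program  = "/usr/sbin/service nginx stop"
  if failed port 80 protocol http then restart
```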
Comparing Sensu and Nagios, my pick would be Sensu.
The main reasons are:
1. Easy setup, and far less restarting of clients, which is a major pain in a large enterprise.
2. Nagios plugins can be used within the Sensu ecosystem.
3. It is scalable and well suited to cloud environments.
Has anyone looked at Zabbix? It has a lot of features and comes as a single package, though I have doubts about its scalability.
As long as an enterprise consists of databases, SAP, network devices, webservers, filers, backup libraries and so on, there is barely an alternative to Nagios (or its cousins Icinga and Shinken).
Maybe one day everything will come out of the cloud automagically, but for a few more years there will still be static servers (physical or virtual, it doesn't matter) with a defined purpose, staying put for at least a few months. We will still have to monitor interface bandwidth, tablespaces, business processes, database sessions, logfiles and JMX metrics: all things where the plugin concept of the Nagios world has an advantage.
I have worked in an organisation where we used both Puppet and Ansible for configuration management, but I always wondered why they would use both tools. What can Puppet do that Ansible cannot?
The only thought that came to my mind was:
- Puppet was used to check whether the system is in the desired state at regular intervals, while Ansible was used to deploy one-time things (code, scripts, packages, etc.)
Can someone please explain why would an organisation use both the tools? Can regular config check be done by Ansible?
Cheers
In the interest of full disclosure, I'm an upstream community contributing developer to Ansible but I will do my best to keep my response neutral.
I think this is largely opinionated and you'll get varied results depending on who you talk to but I think about it effectively like this:
Ansible is an automation tool and Puppet is a configuration management tool. I don't consider them to be direct competitors the way they seem to get compared by tech journalists, except for the fact that there's some overlap in their abilities to perform the functions you would want out of a configuration management tool: service/system state, configuration file templating, application lifecycle management, etc.
The main place where I see these tools in a completely different light is that Ansible performs automation of tasks, and those tasks can be one of many "types" of things that you don't really expect from a configuration management tool, such as IaaS provisioning (AWS, GCE, Azure, RAX, Linode, etc.), physical network configuration (Cisco IOS/ASA, JunOS, Arista, VyOS, Netscaler, etc.), virtual machine creation/management, physical load balancer configuration (F5 BigIP), and the list goes on. Effectively, Ansible is your "automation glue" to create and automate a process that you and your team might otherwise have had to do by hand. As a tool it gets compared to things like Puppet, Chef, and SaltStack because one of the many "types" of tasks you would automate more or less adds up to configuration management.
On the flip side, configuration management tools such as Puppet generally have a daemon running on the nodes, which needs to be provisioned/bootstrapped (maybe with Ansible), which has its advantages and disadvantages (which I won't debate here, it's largely out of scope). One thing that daemon provides you is continuous eventual consistency. You can set configuration authoritatively on the Puppet Master and then the agent will maintain that state on the systems and will report when it has to change something, which can be wired up to alert monitoring to notify you when something's wrong. While Ansible will also report when something needed changing, it only does this when you run the Ansible playbook. It's a push model, not a pull model (nor is it a continuously running daemon that will enforce system state). This has its advantages for reporting and the like. I will note that something like Ansible Tower/AWX can more or less emulate this functionality, but it's not a "baked in" feature. Just something to keep in mind.
Ultimately, I think it boils down to a matter of familiarity with the technologies, the desired feature set, and whether you have a pre-existing investment (both time and money) in a toolchain. If you have been using Puppet for 5 years, there's no real motivation to forklift-replace it with something else when you can use Ansible to augment it (there's even a puppet module in Ansible) and allow each to play nicely with the other, getting the features you want from both. However, if you're starting from scratch, then I think you should do a pros/cons or feature comparison for what you really want out of the tool(s), to find out whether it's worth the investment of picking up two tools from scratch or finding one that can fulfill all your needs. While I'm biased towards Ansible in this regard, the choice ultimately lies with the person who's going to have to use the tool to maintain the infrastructure.
A good example of the hybrid approach: I know of a few companies that use Puppet for configuration management and Ansible for the software release lifecycle, where one of the tasks in their playbooks literally calls the puppet module to bring all the systems into configuration consistency. The Ansible component here is to automate/orchestrate between various systems. The basic outline of the process is this: start by removing a group of hosts from the load balancer, ensure database connections have stopped, perform upgrades/migrations, run Puppet for configuration/state consistency, and then bring things back online in whatever order they've deemed appropriate. This all happens from a single command (or the click of a button in Tower/AWX); a rough sketch follows below.
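A minimal, hypothetical sketch of such a playbook (the host group, load-balancer helper scripts and migration command are placeholders; the puppet module ships with Ansible):

```yaml
- hosts: webservers
  serial: 2                       # roll through the fleet a couple of nodes at a time
  tasks:
    - name: Drain this node from the load balancer
      command: /usr/local/bin/lb-drain {{ inventory_hostname }}    # placeholder helper
      delegate_to: lb01

    - name: Run upgrades/migrations
      command: /opt/app/bin/migrate                                # placeholder

    - name: Run the Puppet agent to restore configuration consistency
      puppet:

    - name: Put the node back into the load balancer
      command: /usr/local/bin/lb-enable {{ inventory_hostname }}   # placeholder helper
      delegate_to: lb01
```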
Anyhoo, I know that was kind of long winded but hopefully it was helpful.
Both Marathon and Aurora are built on Mesos and are supposedly engineered for running long-running services. My questions are:
What are their differences? I have struggled in finding any good explanations regarding their key differences
Do these frameworks run anything that runs on Linux? For Marathon they state that it can run anything that "is executable in a shell" but this is sort of vague :)
Thanks!
Disclaimer: I am the VP of Apache Aurora, and have been the tech lead of the Aurora team at Twitter for ~5 years. My likely-biased opinions are my own and do not necessarily represent those of Twitter or the ASF.
Do these frameworks run anything that runs on Linux? For Marathon they state that it can run anything that "is executable in a shell" but this is sort of vague :)
Essentially, yes. Ultimately these systems are sophisticated machinery to execute shell code somewhere in a cluster :-)
What are their differences? I have struggled in finding any good explanations regarding their key differences
Aurora and Marathon do indeed offer similar feature sets, both being classified as "service schedulers". In other words, you hand us instructions for how to run your application servers, and we do our best to keep them up.
I'll offer some differences in broad strokes. When it comes to shortcomings mentioned in each, I think it's safe to say that the communities are aware and intend to fix them.
Ease of use
Aurora is not easy to install. It will likely feel like you are trailblazing while setting it up. It exposes a thrift API, which means you'll need a thrift client to interact with it programmatically (a REST-like API is coming, but is vaporware at the moment), or use our command line client. Aurora has a DSL for configuration which can be daunting, but allows you to easily share templates and common patterns as you use the system more.
Marathon, on the other hand, helps you to run 'Hello World' as quickly as possible. It has great docs to do this in many environments and there's little overhead to get going. It has a REST API, making it easier to adapt to custom tools. It uses JSON for configuration, which is easy to start with but more prone to cargo culting.
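For a sense of the difference, a hypothetical "Hello World" style Marathon app definition is just a small JSON document (all field values below are made up) that you POST to the REST API, e.g. /v2/apps on the Marathon master:

```json
{
  "id": "/hello-world",
  "cmd": "python3 -m http.server $PORT0",
  "cpus": 0.1,
  "mem": 64,
  "instances": 2
}
```

The rough equivalent in Aurora is a .aurora configuration file written in its Python-based DSL and submitted with the command-line client.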
Targeted use cases
Aurora has always been designed to handle a large engineering organization. The clusters at Twitter have tens of thousands of machines and hundreds of engineers using them. It is critical to Twitter's business. As a result, we take our requirements of scale, stability, and security very seriously. We make sure to only condone features that we believe are trustworthy at scale in production (for example, we have our Docker support labeled as beta because of known issues with Docker itself and the Mesos-Docker integration). We also have features like preemption that make our clusters suitable for mixing business-critical services with prototypes and experiments.
I can't make any claims for or against Marathon's scalability. On the feature front, Marathon has built out features quickly, but this can feel bleeding-edge in practice (Docker support is a good example). This is not always due to Marathon itself, but also to layers further down the stack. Marathon does not provide preemption.
Ownership
To some, ownership and governance of a project is important. I feel that in practice it does not define the openness of a project, but for some people/companies the legal fine print can be a deal-breaker.
Marathon is owned by a company (Mesosphere)
To some this is beneficial, to others it is not. It means that you can pay for support and features. It also means that there is something to be sold, and the project direction is ultimately decided by Mesosphere's interests.
Aurora is owned by the Apache Software Foundation
This means it is subject to the governance model of the ASF, driven by the community. Aurora does not have paying customers, and there is not currently a software shop that you can pay for development.
tl;dr If you are just getting your feet wet with running services on Mesos, I would suggest Marathon as your first port of call. It will be easier for you to get running and poke around the ecosystem. If you are forming the 'private cloud strategy' for a company, I suggest seriously considering Aurora, as it is proven and specifically designed for that.
So I've been evaluating both and this is my summary.
Aurora
[+] also handles recurring jobs
[+] finer grained, extensive file-based configuration
[+] has namespaces so multiple environments can co-exist
[-] read-only UI, no official API
[~] file-based configuration and CLI-based execution bring overhead (which can be justified by the more extensive feature set)
Marathon
[+] very easy to setup and use
[+] UI that provides control, and an extensive API (even with some features missing from the UI at the moment)
[+] event bus to listen in on api calls
[-] handles only long-running jobs
[-] does not have separate deployment/run/cleanup steps; if needed, these have to be combined in a script or one-liner
Even though Aurora has better capabilities, I prefer Marathon due to Aurora's complexity/overhead and its lack of a (control) UI and API.
I have more experience with Marathon.
Ideological:
Marathon is a relatively well-tested product that is used in production at Airbnb. Aurora is an early Apache project (so YMMV).
Both are open source and active. Feel free to contribute pull requests or file issues!
Technical:
Marathon doesn't schedule batch tasks or cron jobs
Marathon has a friendly UI and better health indicators (in 0.8.x)
In regards to your second question, you can run any command or docker container, and Mesos will do the resource isolation for you. If you have 50% CentOS nodes and 50% Ubuntu nodes and you run a task that executes apt-get, the task will have a 50% chance of failure. Mesos and Marathon have no awareness of the actual machines.
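If you need to avoid that, one hedge is to tag your Mesos agents with attributes (for example starting them with --attributes=os:ubuntu, which is an assumption about how your cluster is configured) and use Marathon constraints so a task like this hypothetical one only lands on matching nodes:

```json
{
  "id": "/apt-dependent-task",
  "cmd": "apt-get -y install curl && ./run.sh",
  "cpus": 0.5,
  "mem": 128,
  "constraints": [["os", "LIKE", "ubuntu"]]
}
```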
Disclaimer: I don't have hands-on experience with Aurora, only with Marathon.
ad Q1: In a nutshell Apache Aurora is capable of doing what Marathon + Chronos can provide, that is, schedule both long-running services and recurring (batch) jobs; see also Aurora user guide.
ad Q2: Yes, anything. Currently based on cgroups and Docker but hey, you can roll your own.
We have some 20 or so servers in EC2, most are dynamically spawned (scaling groups).
We're looking for a solution to monitor the uptime of our application.
As an added bonus this solution could also extend to actually monitoring the servers involved, so it's easy to go back in time and see what happened just before a downtime or whatnot.
We're looking for a hosted solution ideally, and it should be easy to scale with it (it needs to somehow dynamically deal with servers being added/removed with no interaction from us).
Anyways, hoping for some recommendations from you guys.
A bit of background ...
We're currently using a custom Nagios setup; it's been reduced to basically doing a simple HTTP check now that the servers have become fully dynamic. We've already been using PagerDuty to deliver the pages. It does OK, but for the maintenance cost we could just as well be using an HTTP check at Server Density or Pingdom.
I've looked briefly at Server Density, and it does look promising. I especially like their install mechanism of just dumping their files into your AMI and letting it take care of the rest.
I'd like to know what options there are, though, before diving deeper into any particular solution.
We use a combination of Server Density for monitoring and PagerDuty for alerting. The two work quite well together.
I have a couple of different projects running at the moment, some PHP apps and a few WordPress instances, which are all currently kept at a web hosting company. The contract period is about to end, and I would be lying if I said I hadn't seriously considered switching to a VPS in the cloud, with prices getting really attractive.
I am totally in love with the idea of being able to turn performance up or down as demand increases or goes away, and thereby cut costs.
With my background as a PHP developer, and only a hint of Linux (Ubuntu) knowledge, I am thoroughly concerned about security if I run my own VPS.
Sure, I am able to install and get things running with my current knowledge (and some help from Google), but is it realistic nowadays to expect that my server (LAMP, really) will stay secure by running out-of-the-box stuff and keeping it up to date?
Thanks
Maintaining your server is just one more thing to worry about, and if you're a developer, your focus should probably be on development. That said, it needs to make financial sense to go the managed route. If you're just working on toy projects (I've got a $20/month VPS that I use for my personal projects and homepage, and it's pretty hands-off) or if you're just getting off the ground, VPSes have the great advantage of being cheap and giving you lots of control of your environment. You can even mitigate some of the risk by keeping aggressive backups, since it's easy to redeploy a server quickly.
But, if you get to the point where it won't affect your profitability to do so, you probably should seriously consider getting someone else to take care of infrastructure for you either by buying managed hosting services or hiring someone to do it for you. It all depends on what you can afford to lose if you get rooted and how much time you can afford to invest in server management and recovery as opposed to coding.
I wouldn't. We did the same thing because the non-managed VPS are sooo cheap, but unless you really need to install applications or libraries that are not part of standard shared host setups, in my experience, being a pure developer as well, the time spent is never worth it.
Unless, of course, it is your own tiny blog or you just want to play around.
But imagine you (or whatever automation you use) update PHP, and for some reason it fails (or worse, you render your current installation unusable): are you good enough to handle this? And if so, how long will it take you? Do you have a friend at hand who can help?
We, as a small company, are getting rid of our VPSs step-by-step and moving back to our reseller package, hosted at a good hosting provider.
Good question, though.
As for security, I have successfully used Amazon EC2 for a number of things. It's not the cheapest around, but it is quite comprehensive: shared data stores between instances, connections to S3, running hosts in different data centers, grouping hosts in different clusters, and so on.
They have a firewall built in, where you can turn everything off except, say, TCP traffic on port 22 for SSH and 80 for web. That, combined with something like Ubuntu, where you can easily run updates without worrying much about breakage, is probably all you need from a security point of view.
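As a sketch with today's AWS CLI (the group name and admin CIDR below are placeholders), that firewall setup amounts to a security group that denies everything inbound except SSH and HTTP:

```
# Hypothetical security group: inbound traffic is denied by default,
# then SSH is opened to an admin network and HTTP to the world.
aws ec2 create-security-group --group-name web --description "web servers"
aws ec2 authorize-security-group-ingress --group-name web --protocol tcp --port 22 --cidr 203.0.113.0/24
aws ec2 authorize-security-group-ingress --group-name web --protocol tcp --port 80 --cidr 0.0.0.0/0
```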
You need to consider cloud computing as a statement of availability, not cost. You can be seriously surprised by the cost in the end.
I have already opted for VPS hosting. Good VPS hosting is costly; these days you may find a cheap dedicated host for the price of a VPS. Have a look at hivelocity.com; I like their services.
As for security, most VPS hosting companies take care of it for you at the infrastructure level, and some may run antivirus software on files. On a dedicated host you need to take care of it yourself or contract managed support services: a trade-off.
A LAMP server is cheap everywhere. You can rent a VPS and get some security, and you can count on services like DNS hosting too, which is a hassle to configure yourself. A VPS can be your first step, since you're doubtful and have no hosting experience. Later, when you discover the advantages of having your own server, you can migrate straight to a dedicated server.
What is acceptable from a security standpoint will differ depending on the people involved, what you want to secure and requirements of the product/service.
For a development server I usually don't care so much, so I usually do some basic securing of the server and then don't pay attention to it again. My main concern is more of someone getting a session and using my cycles to run something. I don't normally care about IP so that's not a concern for me.
If I'm setting up a box that has to meet Sarbanes-Oxley, Safe Harbor, or other PII/PCI standards I must meet I would probably go managed just because I don't want the additional security work load.
Somewhere in between is a judgment based on if I want to commit the required time to secure the server to the level I want it secured at. If I don't want to do it myself I pay someone to do it.
I would be careful about assuming you're getting a certain level of security just because you're paying someone to manage your server. I've come across plenty of shops where security is really an afterthought.
If I understood you correctly, you are considering a move from a web host to a VPS, and wonder if you have the skills to ensure the OS remains secure now that it's under your control?
I guess it's an open-ended question. You are moving from a managed environment to an unmanaged environment, and whether you maintain your environmental security is up to you. If you're running your own server then you need to make sure that default passwords aren't in use (for the database, OS and any services on top), patches are quickly identified and applied, host firewalls are configured properly and suspicious activity alerts are immediately sent to you. Hang on, does your current web host do any of this for you? Without details about your current web host and the planned VPS, you are pretty much comparing apples to oranges.
BTW, I would be somewhat concerned about my LAMP server security, but frankly I would be much more concerned about development errors (SQL injection, XSS) and the packages running on top of my server (default passwords + dev errors).
For a LAMP stack, I would probably not do it. It would be a different case if you were using a platform-as-a-service provider like Windows Azure: in my own experience there is minimal operational overhead and you just upload the app and it runs in a VM (and yes, it supports PHP).
But for Linux there are no such providers that I know of, which means you will have to manage the operating system, the app frameworks, the web server and anything else that you install on the instance. I wouldn't do it myself. I would weigh the cost of hiring a person with the relevant experience to do this for me against the cost of managed services from the VPS provider, and go with one of those two.
Rather than give you advice about what you should do, or tell you what I would do, I'm just going to address your question "is it realistic nowadays to expect that my server (LAMP, really) will stay secure by running out-of the box stuff and keeping it up-to date?" The answer to this question, in my opinion, is basically yes.
dietbuddha is right, of course: what constitutes an acceptable level of security depends on the context, but for all but the most security-sensitive purposes, if you're using a current (i.e. supported) distro, with sane defaults, and keeping up with the security updates, then you ought to be fine.
I have two VPSs, each of which currently runs Ubuntu 10.04 Server. On one of them, I spent some time installing and configuring tiger and tripwire and taking various other security measures. On the other, I simply installed fail2ban, set security updates to automatic, and left it at that. They've been running for a few years now, and I've had no problems with either.
You should do it for fun and for learning purposes. Other than that, don't; you're wasting your own time and a lot of other people's time.
I say this because I've wasted serious time setting up an EC2 instance to host my SVN server and a few other things. I mean, I loved setting everything up and messing with the server; I learned a lot, especially because I'd never done anything on a Linux server before. However, looking back, I wasted a ton of time and had to keep bugging Jordan S. Jones for help.
What are the advantages and disadvantages of self-hosting something like a svn repository? All links and ideas are appreciated.
Off the top of my head:
Advantages of Self-hosting
Flexibility. On my own machine I can install whatever I want. If I would like to use a VCS like Bazaar and use Loggerhead instead of Trac, then right now there isn't really much choice beyond Launchpad, which has its warts
Save money. Costs add up over time especially for large teams
The free plans offered by sites like Assembla are not private. Anybody can have access to your code
Advantages of paid hosting (e.g. GitHub, Assembla, Google Code)
Robustness. You don't need to worry about your server catching fire because it's become somebody else's problem.
Less hassle. You don't need to do all the system administration and tweaking of conf files. Instead you can just focus on the coding
For production you should only use self-hosting if you are a professional sysadmin. Can you answer yes to the following questions (a bit Linux-oriented, but you should get the idea):
Can you react to a system failure within minutes? (I mean, you need sleep at least. Do you have somebody to look after the system while you are asleep?)
Can you spot a system break?
Can you remove exploits from your system?
Can you recompile the kernel if that's the only way to remove an exploit?
Can you configure the system for optimal performance?
Are you willing to pay for UPS, backup storage and alternative internet provider?
If you can answer yes to these questions, the benefits are very attractive and I would go with it.
On the other hand, hosting a development environment can be managed by an administrator of any level, especially with easy-to-use distributions like Ubuntu.
You specifically asked about hosting a Subversion repository, so the first disadvantage that comes to mind is data protection. I personally would never trust a third party with my source code, except for open source code or the code of an unimportant side project. Source code is a very important asset for an ISV, so trusting a third party to protect your source code doesn't sound like a good idea.
And even if it's not about source code, outsourcing other critical parts of your business such as email or accounting/invoicing* is just asking for trouble. And it's not as if you no longer have to care about backups when you outsource your data hosting. You should still back up your data, in case the hosting company screws up.
*) With outsourcing accounting/invoicing I mean all those new hosted invoicing apps, not getting help from an accountant of course
I find the web interfaces of external hosts to be a hassle. Plus you can have however much space you want on your own machine. Like you said, though, the maintenance can be a burden with self-hosting.
How big is your project? If it is not too big just get an account at http://www.beanstalkapp.com
That is what I did. I do not have to worry about any setups and can focus on the actual development.
If your situation is more complex, self-hosting is worth considering. But keep in mind that you would have to take care of backups too, and that an update of the server can screw a lot of things up.
This ties into the server catching fire, but one key advantage of external hosting is that it's (presumably) backed up automatically. Doing your own backups is a hassle, and ends up being less reliable than what you'd get from Google.
With self-hosting there comes great responsibility.
you have to back up everything
you need spare parts for your hardware
if you have stuff that is important, you need redundant hardware
you don't have real holidays: if something breaks, you have to fix it
In addition to what others have already mentioned, there are also benefits specific to using cloud services by companies like Amazon, Yahoo, Google, Microsoft, etc.
Despite what some might claim, self-hosting is not inherently "safer." In most cases, it's quite the opposite actually. This is because most small-to-medium-sized companies do not have the resources to provide the level of reliability and redundancy that mega-corporations like Microsoft or Amazon can. Unless you are hosting source code for a top-secret defense project or other projects where the threat of espionage is very real, the greatest threats to your code and your business are more mundane things like server/network downtime.
Redundancy: Cloud services provide levels of redundancy that most businesses simply cannot obtain on their own. This includes data redundancy (back-ups/RAID), hardware redundancy (components/equipment), and geographical redundancy (multiple server locations across the globe). If a natural disaster hits your city, is your data going to be safe?
Multi-tenancy: Each small business by itself cannot afford 24/7 support staff and multi-million dollar equipment. But pooling their resources together through a Cloud service affords them (through centralization and better resource utilization/higher efficiency) access to a much higher level of service.
Security: Related to multi-tenancy, by centralizing the data of thousands of businesses, this allows security resources to be much more tightly focused.
Lastly, it should be noted that most commercial hosting providers offer co-location and dedicated hosting. And even cloud service providers allow customers to configure their "server" however they want, and installing/running whatever applications on it they want. So you can have a great deal more freedom than that offered by $10/month web hosting.