I would like to monitor the following on Amazon EC2 instances running Amazon Linux, every X minutes:
disk statistics
process stats (similar to what top does)
ram usage
check if my scripts are running fine
Should I write my own scripts, or are there tools that already achieve this?
I searched and found a suggestion to use Munin.
Which seems to be the better approach?
Here is a great article on scale. About halfway down, the author lists monitoring tools and how they differ.
http://highscalability.com/blog/2010/8/16/scaling-an-aws-infrastructure-tools-and-patterns.html
We've implemented Cacti; it was very easy, and it creates all sorts of graphs/reports (most of what you mentioned). Munin is one that he lists as well, but we have not tried that solution yet.
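If you do end up rolling your own for the pieces these tools don't cover (like checking that your own scripts are still running), a minimal cron-driven sketch is below. It assumes the psutil Python library is installed; the script name it looks for is just a placeholder.

```python
#!/usr/bin/env python3
# Minimal DIY monitoring sketch using the psutil library; run it from cron
# every X minutes. The script name checked below is a placeholder.
import psutil

# disk statistics and RAM usage
disk = psutil.disk_usage("/")
mem = psutil.virtual_memory()
print("disk used: %.1f%%, ram used: %.1f%%" % (disk.percent, mem.percent))

# top-like snapshot: the five busiest processes by CPU
procs = sorted(psutil.process_iter(["pid", "name", "cpu_percent"]),
               key=lambda p: p.info["cpu_percent"] or 0, reverse=True)
for p in procs[:5]:
    print(p.info["pid"], p.info["name"], p.info["cpu_percent"])

# check that a given script is still running (name is hypothetical)
def script_running(needle):
    for p in psutil.process_iter():
        try:
            if needle in " ".join(p.cmdline()):
                return True
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
    return False

print("my_script.py running:", script_running("my_script.py"))
```

In practice you'd pipe the output somewhere useful (syslog, CloudWatch, email), which is exactly the part tools like Cacti and Munin already do for you.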
I am learning MapReduce and Hadoop now. I know I can do some tests and run some samples on a single node. But I really want to practice in a real distributed environment. So I want to ask:
Is there a website which can offer a distributed environment for me to do some experiments?
Somebody told me that I can use Amazon Web Services to build a distributed environment. Is that true? Does anyone have experience with that?
And I want to know: how did you guys learn Hadoop before you used it in your work?
Thank you!
There are a few options:
If you just want to learn about the Map/Reduce paradigm, I would recommend you take a look at JSMapReduce. It runs directly in the browser, so you have nothing to install, and you can create real Map/Reduce programs.
If you want to learn about Hadoop specifically, Amazon has a service called Elastic MapReduce (EMR), which is essentially Hadoop running on AWS. You write your Hadoop job, decide how many machines you want in your cluster and which type of machines you want, and then run it; EMR does everything else: it bootstraps the machines for you, runs your job, and stores the results on S3. I would recommend looking at this tutorial to get an idea of how to set up a job on EMR. Just remember, EMR is not free, so you'll have to pay for your computing resources.
Alternatively, if you're not looking to pay the cost of EMR, you can always set up Hadoop on your local machine in non-distributed mode and experiment with it, as described here. Even if it's a single-node setup, the abstractions are the same as on a big cluster, so it's a good way to get up to speed and then move to EMR or a real cluster when you want to get serious.
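To make "write your Hadoop job" concrete, here is what a minimal word count looks like with Hadoop Streaming, which lets you write the mapper and reducer as plain scripts. This is just a sketch, but the same script runs unchanged locally, on a cluster, or on EMR:

```python
#!/usr/bin/env python3
# Word count for Hadoop Streaming: one file acts as the mapper
# ("wordcount.py map") or the reducer ("wordcount.py reduce").
import sys

def mapper():
    for line in sys.stdin:
        for word in line.split():
            print("%s\t1" % word)              # emit (word, 1)

def reducer():
    # Streaming sorts mapper output by key, so equal words arrive together.
    current, count = None, 0
    for line in sys.stdin:
        word, n = line.rstrip("\n").split("\t")
        if word != current:
            if current is not None:
                print("%s\t%d" % (current, count))
            current, count = word, 0
        count += int(n)
    if current is not None:
        print("%s\t%d" % (current, count))

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```

You submit it via the hadoop-streaming jar with `-mapper "wordcount.py map" -reducer "wordcount.py reduce"` plus your input/output paths (paths illustrative), and EMR has a streaming job type that takes the same mapper/reducer.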
Amazon offers a free tier, so you can spin up some VMs and try experimenting that way. The micro instances they have aren't very powerful, but they are fine for small-scale tests.
You can also spin up VMs on your desktop if it is powerful enough. I have done this myself using VMware Player. You can install any flavor of Linux you like for free; Ubuntu is pretty easy to start with. When you set up the networking for your VMs, be sure to use bridged networking. That way each VM gets its own IP address on your network, so they can communicate with each other.
Well, it's maybe not a "100% online" option, but it should be a really good alternative, with some details.
If you are not ready to pay for online cluster resources (such as the EMR solution mentioned here), you don't want to build your own cluster, and you are not satisfied with a single-node setup, you can try to build a virtual cluster on a powerful enough desktop.
You need a minimum of 3 VMs (I prefer Ubuntu); 4 is better. To see real Hadoop behavior you need the minimal replication factor of 3, so you need 3 DataNodes and 3 TaskTrackers. You also need a NameNode/JobTracker; it could share one of the DataNode VMs, but I'd recommend a separate VM. If you need HBase, for example, you again need one Master and a minimum of 3 RegionServers. So, again, you need 3, but better 4, VMs.
There is a pretty good free product, Cloudera CDH, which is a "somewhat commercial" Hadoop distribution. They also have a manager with a GUI and simplified installation. BTW, they have even prepared demo VMs, but I have never used them. You can download everything here. They also host a lot of material about Hadoop and their environment.
An alternative between a completely free solution with VMs on a desktop and a paid service like EMR is a virtual cluster built on top of one dedicated server, if you have one spare. This is what I personally did: one physical server running VMware's free hypervisor, 4 virtual machines, 1 SSD for the OS, and 3 "general" HDDs for storage. Every VM runs Ubuntu 11.04 (again free), with the free edition of Cloudera Manager and CDH. So everything is free, but you need some hardware, which is often available as spare. And you have a playground. OK, you need to invest time, but in my opinion you will get the greatest experience from this approach.
Although I do not know much about it, another option may be Greenplum's Analytics Workbench (a 1000-node cluster with Hadoop for testing): http://www.greenplum.com/solutions/analytics-workbench
We want to use an ActiveMQ master/slave configuration based on a shared file system on Amazon EC2 - that's the final goal. Operating system should be Ubuntu 12.04, but that shouldn't make too much difference.
Why not a master/slave configuration based on RDS? We've tried that and it's easy to set up (including multi-AZ). However, it is relatively slow and the failover takes approximately three minutes - so we want to find something else.
Which shared file system should we use? We did some research and came to the following conclusion (which might be wrong, so please correct me):
GlusterFS is often suggested and is said to support multi-AZ setups fine.
NFSv4 should work (while NFSv3 is said to corrupt the file system), but I didn't see many references to it on EC2 (rather: people who asked about NFS got the suggestion to use GlusterFS). Is there any particular reason for that?
Ubuntu's Ceph isn't stable yet.
Hadoop Distributed File System (HDFS) sounds like overkill to me and the NameNode would again be a single point of failure.
So GlusterFS it is? We found hardly any success stories; instead, rather dissuasive entries in the bug tracker without any real explanation: "I would not recommend using GlusterFS with master/slave shared file system with more than probably 15 enqueued/dequeued messages per second." Does anyone know why, or is anyone successfully using ActiveMQ on GlusterFS with a lot of messages?
EBS or ephemeral storage? Since GlusterFS should replicate all data, we could use ephemeral storage. Or are there any advantages to using EBS? (IMHO snapshots are not relevant for our scenario.)
We'll probably try out GlusterFS, but according to Murphy's Law we'll run into problems at the worst possible moment. So we'd rather try to avoid that by (hopefully) getting a few more opinions on this. Thanks in advance!
PS: Why didn't I post this on ServerFault? It would be a better fit, but on SO there are 10 times more posts about this topic, so I stuck with the flock.
Just an idea... but with ActiveMQ 5.7 (or maybe already 5.6) you can have pluggable lockers (http://activemq.apache.org/pluggable-storage-lockers.html). So it might be an option to use the filesystem as storage and RDS as just a locking mechanism. Note that I have never tried this before.
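To illustrate the idea, the broker config might look roughly like the snippet below: KahaDB on the shared filesystem, with a lease-database locker pointing at an RDS data source. This is an untested sketch; the data-source bean and the attribute values are assumptions, so check the linked page for the exact syntax.

```xml
<!-- Untested sketch: KahaDB on shared storage, lock held in RDS.
     The #rds-ds data source bean is assumed to be defined elsewhere
     in the broker configuration. -->
<persistenceAdapter>
  <kahaDB directory="/mnt/shared/activemq/kahadb">
    <locker>
      <lease-database-locker dataSource="#rds-ds"
                             lockAcquireSleepInterval="10000"/>
    </locker>
  </kahaDB>
</persistenceAdapter>
```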
I'm using EC2 to offload some computing tasks from my desktop - basically running some jobs that would take hours or days on a desktop. Nothing particularly large scale, so I'm not looking to set up anything too complex; it should be able to run on a single instance running Ubuntu. I know this is stretching the use case of EC2 and there are better long-term solutions than using EC2 this way, but I'll address that at a later point in time.
However, if I use standard, high-memory, or high-CPU Ubuntu server instances, even the XL classes (e.g. m2.4xlarge) are fairly slow in terms of their computing capability, and the cluster compute instances are probably more appropriate for my needs. The catch is that I can't use the cluster compute instances unless I choose the "ubuntu server for cluster instances" images, which lack preinstalled libraries and software. I can install the packages piece by piece, but this seems like a roundabout way of making the images do something they're not intended for (I tried swapping an EBS volume from a regular server instance into a cluster instance, but the instance wouldn't boot when I did that).
Basically the bottom line is I would like to use the hardware of their cluster compute instances but not use the stripped down OS so I can run some single instance jobs with a minimal setup. What's the best way to go about this?
You can try using cloud-init to install your required packages at boot. Basically you pass a shell script as user data, and it is executed when the instance first boots.
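For example, a user-data script along these lines would do it. This is a sketch; the package list is just a guess at what your jobs might need (cloud-init will run anything that starts with a shebang, so a Python script works as well as shell):

```python
#!/usr/bin/env python3
# Sketch of a cloud-init user-data script: install the packages the
# stripped-down cluster-compute AMI is missing. Package list is hypothetical.
import subprocess

packages = ["build-essential", "gfortran", "libblas-dev", "liblapack-dev"]

subprocess.check_call(["apt-get", "update"])
subprocess.check_call(["apt-get", "install", "-y"] + packages)
```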
Did you look into bootstrapping? A CloudFormation template might be an answer.
We have some 20 or so servers in EC2; most are dynamically spawned (scaling groups).
We're looking for a solution to monitor the uptime of our application.
As an added bonus, this solution could also extend to actually monitoring the servers involved, so it's easy to go back in time and see what happened just before a downtime or whatnot.
We're looking for a hosted solution ideally, and it should be easy to scale with it (it needs to somehow dynamically deal with servers being added/removed with no interaction from us).
Anyways, hoping for some recommendations from you guys.
A bit of background ...
We're currently using a custom Nagios setup; it's been reduced to basically doing a simple HTTP check now that the servers have become fully dynamic. We've already been using PagerDuty to deliver the pages. It does OK, but for the maintenance cost we could just as well be using an HTTP check at Server Density or Pingdom.
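For context, the whole check has essentially been reduced to something like the following sketch (the endpoint is a placeholder for our application's health URL):

```python
# Essentially all that's left of our Nagios setup: a plain HTTP check.
# The URL below is a placeholder.
import urllib.request

def is_up(url="http://app.example.com/health"):
    try:
        return urllib.request.urlopen(url, timeout=10).getcode() == 200
    except Exception:
        return False

print("UP" if is_up() else "DOWN")
```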
I've looked briefly at ServerDensity, and it does look promising; I especially like their install mechanism of just dumping their files into your AMI, and it takes care of the rest.
I'd like to know what options there are, though, before diving deeper into any particular solution.
We use a combination of Server Density for monitoring and PagerDuty for alerting. The two work quite well together.
I'm using LIBSVM for regression analysis. Works like a champ. But a 3-parameter grid search to optimize parameters for the model maxes out all four cores on my 2.66 GHz Intel box, and I still have to wait a couple of hours to generate a single model.
This seems like a job for Amazon EC2.
I've seen plenty of tutorials and introductory material on using EC2 for web-related tasks.
But what if you have a small compute-intensive custom ANSI-C program that you want to run multiple instances of on EC2? Can anyone provide pointers on how to do that (or even just buzzwords to search for)?
I don't think your quest is too different from that of a web application. Your stack is different of course, but regardless – the principles remain the same.
As someone commented on your question: Elastic MapReduce might be what you're looking for to parallelize your work easily. If that is too limited, you could look into Cloudera: a ready-to-rumble Hadoop distribution with support for EC2 as well.
If map-reduce is not to your liking, then you need to set up your own instance. Roughly speaking, the key points are as follows:
You want to figure out a way to start EC2 instances.
You want to figure out a way to bootstrap and configure them.
Cluster/network?
Starting EC2 instances
If you don't require something like auto-scaling or a custom interface, the AWS Console does an extremely good job. You have to select an AMI (Amazon Machine Image) suitable for your project. I'd probably look into either the official AMIs or something Ubuntu-based (if I remember correctly, Ubuntu is the most used Linux on EC2).
But that is up to you and your liking. (And I don't know enough about your project.)
Once you've figured out a setup that works for you, the easiest way to clone your work is to create your own AMI and start instances from it.
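Scripting that launch step is straightforward too. For instance, with the boto3 library it might look like the sketch below; the AMI ID, key name, and instance type are placeholders, and the user-data file ties in with the bootstrapping section that follows:

```python
# Sketch: launch N workers from a custom AMI (all identifiers are placeholders).
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

with open("bootstrap.sh") as f:      # user-data script, see "Bootstrapping" below
    user_data = f.read()

instances = ec2.create_instances(
    ImageId="ami-12345678",          # your custom AMI
    InstanceType="c1.xlarge",        # pick a high-CPU type for LIBSVM
    MinCount=4, MaxCount=4,          # e.g. one instance per chunk of the grid search
    KeyName="my-keypair",
    UserData=user_data,
)
print([i.id for i in instances])
```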
Bootstrapping
Bootstrapping can be done using what EC2 calls a user script. It allows you to pass a shell script to the instance, which executes the calls to set up your stack. I'm not sure what is required in this case; if you comment or extend your question, I can go into detail here.
Cluster/Networking
This is a wild guess, since I'm not sure what your code does or how it works. If a cluster isn't necessary, I'd probably scale up on a single instance first. You can get a lot of cores and RAM provisioned easily with EC2. Depending on whether your work requires more RAM or CPU, look into the high-CPU and high-memory instance types.
You can start off with a t1.micro, which you can currently even get for free, and go from there.
Let me know if this helps!