What is the difference between a distributed system and a clustered system? - cluster-computing

Both are defined as a set of computers that work together and give end users the perception of a single computer running behind them.
So what is the difference here?

What is the difference between a car and a sports car?
A cluster is a system, usually managed by a single organisation. Clusters normally have very low latency and consist of server hardware. A distributed system can be anything: JavaScript on the client and PHP code on the server that together make up one system is already called a distributed system by some people.
In general, when working with distributed systems you deal a lot with long latencies and unexpected failures (as in p2p systems). When building a cluster (or a big cluster, which may be called a supercomputer) you try to prevent these by using more robust hardware and better network interconnects (e.g. InfiniBand). Nevertheless, a cluster is still a distributed system. (A sports car still has four wheels and an engine.)

Related

Difference between BOINC and Hadoop/Spark/etc

What's the difference between BOINC (https://en.wikipedia.org/wiki/Berkeley_Open_Infrastructure_for_Network_Computing) and general big data frameworks such as Hadoop/Spark/etc.? They all seem to be distributed computing frameworks - are there places where I can read about the differences, or about BOINC in particular?
It seems the Large Hadron Collider in Europe is using BOINC; why not Hadoop?
Thanks.
BOINC is software that can use the unused CPU and GPU cycles on a computer to do scientific computing.
BOINC is strictly a single application that enables grid computing using unused computation cycles.
Apache Hadoop is an open-source software framework for distributed storage and distributed processing of very large data sets on computer clusters built from commodity hardware.
All the modules in Hadoop are designed with a fundamental assumption that hardware failures are common and should be automatically handled by the framework.
The core of Apache Hadoop consists of a storage part, known as Hadoop Distributed File System (HDFS), and a processing part called MapReduce.
(emphasis added to framework and its dual functionality)
Here, you see Hadoop is a framework (also referred to as an ecosystem) that has both storage and computing capabilities. Hadoop vendors such as Cloudera and Hortonworks bundle additional functionality into it (Hive, HBase, Pig, Spark, etc.) as well as a few security/auditing tools.
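To make that dual storage/processing role concrete, here is the classic word-count job sketched against Hadoop's public org.apache.hadoop.mapreduce API (the input and output arguments are simply HDFS paths you choose; treat this as an illustrative sketch rather than production code):

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Word count: mappers run where HDFS stores the input blocks, the framework
// shuffles the intermediate (word, 1) pairs, and reducers sum them up.
// Failed tasks are retried automatically by the framework.
public class WordCount {

    public static class TokenizerMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer tokens = new StringTokenizer(value.toString());
            while (tokens.hasMoreTokens()) {
                word.set(tokens.nextToken());
                context.write(word, ONE);   // emit (word, 1)
            }
        }
    }

    public static class SumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));   // emit (word, total)
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(SumReducer.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));    // HDFS input dir
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // HDFS output dir
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```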
Additionally, hardware failure is handled differently by these two clusters. If a BOINC node dies, there is no fault tolerance; those resources are lost. In the case of Hadoop, data is replicated and tasks are re-run a certain number of times before eventually failing, but these steps are traceable as long as the logging services built into the framework are running.
It seems the Large Hadron Collider in Europe is using BOINC; why not Hadoop?
Because BOINC provides software that anyone in the world can install to join the cluster, they gain a large amount of computing power from anywhere, practically for free.
They might be using Hadoop internally to do some storage and perhaps Spark to do additional computing, but buying commodity hardware in bulk and building/maintaining that cluster seems cost-prohibitive.
What is similar between BOINC and Hadoop is that both exploit the fact that a big problem can be split into many parts and solved piecewise. And both are most associated with distributing data across many computers, rather than distributing an application.
The difference is the degree of synchronisation between all contributing machines. With Hadoop the synchronisation is very tight and you expect at some point all data to be collected from all machines to then come up with the final analysis. You literally wait for the last one, and nothing is returned until that last fraction of the job is completed.
With BOINC, there is no synchronicity at all. You have many thousands of jobs to be run. The BOINC server side, run by the project maintainers, orchestrates the delivery of jobs to the BOINC clients run by volunteers.
With BOINC, the project maintainers have no control over the clients at all. If a client does not return a result, the work unit is sent out again elsewhere. With Hadoop, the whole cluster is accessible to the project maintainer. With BOINC, the application is provided for several different platforms, since it is completely uncertain which platform the user offers. With Hadoop everything is well defined and typically very homogeneous. BOINC's largest projects have many tens of thousands of regular volunteers; Hadoop has whatever you can afford to buy or rent.

What are the minimum machine specifications necessary for Admin and Container processes?

The reference material simply states that JDK7 is required for Spring XD.
What are the minimum requirements (RAM, CPU, Disk) for hosts meant to run Spring XD Admin?
What are the minimum requirements (RAM, CPU, Disk) for hosts meant to run Spring XD Containers?
The answer in both cases is: it depends on what you need to use them for. Spring XD appears to be designed for high-throughput computing (HTC), so unlike traditional high-performance computing, adding GPUs or coprocessors would probably not be particularly beneficial here. If you just want to try it out and happen to have several servers lying around, then as long as you have something powerful enough to run an OS that supports Java, you can probably at least make it work. If you are in the initial stages of testing Spring XD to see whether it will integrate with your existing infrastructure, that lets you try it out cheaply. If you have passed that stage, are confident that Spring XD will work, and would like to purchase hardware to optimize its performance, read on.
I have not used Spring XD before, but based on the documentation I have been reading and some experience with HTC, there are a few considerations for setting up systems to run it. If you take a look at the diagram from the docs and read a little about the services, it seems like the Admin, ZooKeeper, Analytics Repo and Batch Job DB could be hosted on virtual machines (VMs) under the hypervisor of your choice.
Running several of the subsystems required for the distributed model on VMs would give you the ability to scale resources as necessary; e.g., to begin with, a single hypervisor host may be sufficient to run everything, but as traffic/use grows it may be desirable to separate the VMs onto multiple hypervisors and give some of the VMs additional resources.
The containers appear to work like many other virtualization or containerization schemes for HTC, where more powerful systems (e.g. lots of RAM, SSD storage) allow users to run more containers on a single physical box.
To adequately assess the needs of a new system running any application, it is important to understand what the limiting factor on the problem is: is it memory-bound, I/O-bound or CPU-bound? For large-scale parallel applications there are a variety of tools for profiling code and determining where bottlenecks occur. TAU is a common profiling utility in HPC, and there are several proprietary offerings available as well.
Once the limitations of the program are clear, speccing out a system with hardware to reduce/minimize the issue is a lot easier, and normally less expensive. Hopefully this information is helpful.
Additions based on comments:
It seems like it would run with 128k of memory if you have an OS that will boot and run Java and meet any other requirements. If there is a backend storage setup somewhere, like a standalone DB server that can be used for the databases as described in the DB Config section of the guide, it seems like only a small amount of local storage would be necessary.
Depending on how you deploy the images for the Admin OS, even that may not be necessary, as you could use KIWI to create and deploy a custom OS image of your choosing with configuration files and other customizations embedded in the image. This image could be loaded over the network via PXE, or built to one of the other output formats KIWI supports, such as VM images, bootable USB and more.
The exact configuration of the systems running Spring XD will depend on the end goals, available infrastructure and a number of other things. It seems like the Spring XD Admin node could be run on most infrastructure servers. Factors such as reliability, stability and desired performance must also be considered when choosing hardware.
Q: Will Spring XD Admin run on a system with RaspberryPi like specs?
A: Based on the documentation, yes.
Q: Will it run with good performance or reliably on such a system?
A: Probably not, if it is being used for extended periods of time or for large amounts of traffic.

Scaling Tigase XMPP server on Amazon EC2

Does anyone have any experience running clustered Tigase XMPP servers on Amazon's EC2? Primarily I wish to know about anything non-obvious that might trip me up. (For example, apparently running Ejabberd on EC2 can cause issues due to Mnesia.)
Or if you have any general advice on installing and running Tigase on Ubuntu.
Extra information:
The system I’m developing uses XMPP just to communicate (in near real-time) between a mobile app and the server(s).
The number of users will initially be small, but hopefully will grow. This is why the system needs to be scalable. Presumably for just a few thousand users you wouldn’t need a cc1.4xlarge EC2 instance? (Otherwise this is going to be very expensive to run!)
I plan on using a MySQL database hosted in Amazon RDS for the XMPP server database.
I also plan on creating an external XMPP component written in Python, using SleekXMPP. It will be this external component that does all the ‘work’ of the server, as the application I’m making is quite different from instant messaging. For this part I have not worked out how to connect an external XMPP component written in Python to a Tigase server. The documentation seems to suggest that components are written specifically for Tigase, rather than for a general XMPP server using XEP-0114: Jabber Component Protocol, as I expected.
With this extra information, if you can think of anything else I should know about I’d be glad to know.
Thank you :)
I have lots of experience, and I think there are loads of non-obvious problems. For example, the only reliable instance type on which to run an application like Tigase is cc1.4xlarge. Others cause problems with CPU availability, and it is just a lottery whether you are lucky enough to run your service on a server that is not busy with other people's work.
You also need an instance with the highest possible I/O to make sure it can cope with the network traffic. The high I/O requirement applies especially to the database instance.
Not sure if this is obvious or not, but there is a problem with hostnames on EC2: every time you start an instance, the hostname and IP address change. A Tigase cluster is quite sensitive to hostnames. There is a way to force/change the hostname for an instance, so this might be a way around the problem.
Of course, I am talking about a cluster for millions of online users and really high traffic (100k XMPP packets per second or more). Generally, for a large installation it is way cheaper and more efficient to have dedicated servers.
Generally Tigase runs very well on Amazon EC2 but you really need the latest SVN code as it has lots of optimizations added especially after tests on the cloud. If you provide some more details about your service I may have some more suggestions.
More comments:
When it comes to costs, a dedicated server is always the cheaper option for a constantly running service. Unless you plan to switch servers on/off on an hourly basis, I would recommend going for a dedicated service. Costs are lower and performance is way more predictable.
However, if you really want/need to stick with Amazon EC2, let me give you some concrete numbers. Below is a list of instance configurations and how many online users the cluster was able to reliably handle:
5*cc1.4xlarge - 1,700,000 online users
1*c1.xlarge - 118k online users
2*c1.xlarge - 127k online users
2*m2.4xlarge (with 5GB RAM for Tigase) - 236k online users
2*m2.4xlarge (with 20GB RAM for Tigase) - 315k online users
5*m2.4xlarge (with 60GB RAM for Tigase) - 400k online users
5*m2.4xlarge (with 60GB RAM for Tigase) - 312k online users
5*m2.4xlarge (with 60GB RAM for Tigase) - 327k online users
5*m2.4xlarge (with 60GB RAM for Tigase) - 280k online users
A few more comments:
Why does the amount of memory matter so much? Because CPU power is very unreliable and inconsistent on all but cc1.4xlarge instances. You have 8 virtual CPUs, but if you look at the top command you often see one CPU working and the rest idle. This insufficient CPU power leads to internal queues growing in Tigase. When the CPU power comes back, Tigase can process the waiting packets. The more memory Tigase has, the more packets can be queued, and the better it handles CPU deficiencies.
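As a toy illustration of that buffering effect (this is not Tigase code, just a generic sketch of the idea): a bounded in-memory queue absorbs incoming packets while the consumer has no CPU, and a larger bound (i.e. more heap) means fewer drops during a longer stall.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Toy illustration (not Tigase code): a bounded in-memory queue absorbs
// incoming "packets" while the consumer is stalled. A larger capacity
// (more heap in a real server) lets the system survive a longer CPU
// stall without dropping traffic.
public class QueueBufferDemo {
    public static void main(String[] args) throws InterruptedException {
        // Capacity stands in for "how much heap we can spend on queued packets".
        LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>(100_000);
        AtomicInteger dropped = new AtomicInteger();

        // Producer: simulates network traffic arriving while the CPU is busy elsewhere.
        Thread producer = new Thread(() -> {
            for (int i = 0; i < 200_000; i++) {
                if (!queue.offer("packet-" + i)) {
                    dropped.incrementAndGet();   // queue full: packet lost
                }
            }
        });

        // Consumer: unavailable for a while (a "noisy neighbour" has the CPU),
        // then drains the backlog once cycles are available again.
        Thread consumer = new Thread(() -> {
            try {
                Thread.sleep(2_000);
                while (queue.poll() != null) {
                    // process the waiting packet
                }
            } catch (InterruptedException ignored) {
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
        System.out.println("Packets dropped during the stall: " + dropped.get());
    }
}
```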
Why is 5*m2.4xlarge listed 4 times? Because I repeated the tests many times, on different days and at different times of day. As you can see, depending on the time and date the system could handle a different load. I guess this is because the Tigase instances shared CPU power with some other services; if those were busy, Tigase suffered from insufficient CPU power.
That said, I think with an installation of up to 10k online users you should be fine. However, other factors like roster size matter greatly, as they affect traffic and load. Also, if you have other elements which generate significant traffic, this will put load on your system.
In any case, without some tests it is impossible to tell how your system really behaves or whether it can handle the load.
And the last question regarding component:
Of course Tigase does support XEP-0114 and XEP-0225 for connecting external components, so components written in different languages should not be a problem. That said, I recommend using Tigase's API for writing components. They can be deployed either as internal Tigase components or as external components, and this is transparent to the developer; you do not have to worry about it at development time. This is part of the API and framework.
Also, you can use all the goodies from the Tigase framework: scripting capabilities, monitoring, statistics, and much easier development, as you can easily deploy your code as an internal component for tests.
You really do not have to worry about any XMPP-specific stuff; you just fill in the body of the processPacket(...) method and that's it.
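As a rough sketch only (the names AbstractMessageReceiver, Packet and addOutPacket come from the Tigase component development guide; check the docs of your Tigase release for the exact signatures), a component skeleton looks something like this:

```java
import java.util.logging.Logger;

import tigase.server.AbstractMessageReceiver;
import tigase.server.Packet;

// Rough sketch of a Tigase component: the framework handles the XMPP
// connections, routing, threading and queueing; the component only has to
// decide what to do with each packet addressed to it.
public class MyWorkerComponent extends AbstractMessageReceiver {

    private static final Logger log =
            Logger.getLogger(MyWorkerComponent.class.getName());

    @Override
    public void processPacket(Packet packet) {
        log.finest("Received: " + packet);

        // Do the application-specific "work" here and, if a reply is needed,
        // build a result packet and hand it back to the framework, e.g.:
        // addOutPacket(reply);
    }
}
```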
There should be enough online documentation for all of this on the Tigase website.
Also, I would suggest reading up on Python's support for multi-threading and how it behaves under very high load. It used to be not so great.

Is on-demand elasticity the only major feature of cloud computing that cannot be easily found with traditional hosting?

I am trying to compare cloud computing (on EC2) against traditional hosting on the following grounds to determine whether any of these features present unique benefits in the world of cloud computing versus more traditional hosting strategies:
Real-time monitoring
Server virtualization
Deployment automation
High performance computing
On-demand elasticity
As far as I can see, (1) monitoring is just as easy in both areas; (2) server virtualization is also present in both areas thanks to server farms which allow traditional hosts to beef up resources at will - and of course the same applies in the cloud; (3) deployment can be equally automated in both areas since the same tools can often be applied to both; (4) in the area of high-performance computing maybe you theoretically get an extra boost from the cloud, but I'm not so sure - you have to pay for that boost whether it's the cloud or not; (5) elasticity is the only real benefit that I can see of moving to the cloud - resources can be pumped up at the flick of a switch.
So my question is, is this really the only benefit of cloud computing from this list that offers a real benefit over traditional hosting or is my analysis flawed?
The main difference here is the cost model. While it's true you can gain all of the same benefits from your list with both Cloud Computing and traditional hosting, you pay up front for traditional hosting. You have to buy and maintain your own servers, while cloud computing allows you to pay a variable cost.
This is the reason cloud computing is so attractive for startup companies.
Not only do you have elasticity, but you have, in theory at least, a greater total amount of resources available than you could have with any static hosting solution.
Also, a side effect of elasticity is decreased electricity usage, which may or may not be a factor for you.
The company I work for is getting ready to move from self-hosting to a cloud provider (EC2). One thing I am greatly looking forward to is not having to worry about managing hardware. I don't need to worry about lead time for ordering parts. The need to have spare parts on-hand to cover unexpected hardware failures is gone. I don't need to worry about UPS or any power. We aren't big enough for cooling to be a concern... but now we never will have to worry about that either.
Depending on your own datacenter costs, a cloud computing platform can be much cheaper, as you don't need anybody to manage physical devices. Cloud services can provide bulk computing resources at a likely lower cost than you could achieve by buying the machines and hooking them up yourself.
Assuming your "traditional hosting" involves a single server, there is a very real benefit to high-performance computing in cloud / grid environments. Specifically, virtually unlimited performance, since you can have n cores working at the same time, whereas with a single server, you are limited by the maximum server capacity.
To put it more clearly, if the most powerful computer in the world is a 1000-core system with 20 terabytes of RAM, then that's the most power you could have on a hosted server. However, a cloud consisting of 100 of these machines could do 100x the work in almost the same amount of time.
Additionally, it's generally less expensive (financially) to distribute work across multiple smaller machines than it is to get one powerful system capable of doing the same work.
And if you'd like to talk about disaster recovery... clouds can be geographically distributed, meaning if a tornado rips your data center out of the ground, shreds the servers into little shards of metal and plastic, and embeds them in telephone poles... you experience only a slight dip in performance, because your other 99 servers are still operating.
Elasticity of computing, storage and network capacity is just a feature. Yet it brings a huge number of economic benefits for companies. For example, by implementing a cloud-bursting scenario, a small SaaS company could easily and cheaply handle traffic and usage spikes that might take an expensive hosted solution down.
Elasticity is only useful if you have a problem that can be scaled horizontally. For example, a web server serving a static site: if the load increases, add more web servers to serve the exact same content. On the other hand, even a simple blog site breaks under that scenario, as comments entered into one server's database are not reflected on the other machines.
Having the resources to scale is not the same thing as having the ability to scale. Cloud computing will not solve scalability issues in your application.
A good example of this is a video hosting site: using AWS to deliver the videos results in a disappointing experience, since EC2 cannot deliver the IOPS necessary to serve video. Throwing more machines at the problem won't solve the issue of how data gets from disk to network. (Yes, I'm aware of the ridiculously expensive high-IOPS instances.)

Is it possible to rent CPU cycles?

I have an application that takes days to process data. Is there a service that would let me run my application on powerful computers?
I'm not running a website or a web service. This is taking lots and lots of data files, running them through a big custom application, and outputting a result.
It takes days on my PC and it's something that needs to be done every once in a while, but not continuously.
Cost isn't really an issue, in the sense that my company will pay for it, but of course it should be cheaper than buying a big-ass machine ourselves.
Have you considered Amazon EC2? You pay by the hour for what you use. No more, no less. You could even rent many servers at once to split the workload.
I'm not sure if that meets your requirement of "powerful computers", because they're just average servers, but at least it will give you a pay-as-you-go solution for running the program off of your own computer.
Amazon's EC2 Service is an excellent solution for your needs. You only pay for the time you use, and you can scale up to as many machines as you need.
From their information:
Elastic – Amazon EC2 enables you to increase or decrease capacity within minutes, not hours or days. You can commission one, hundreds or even thousands of server instances simultaneously. Of course, because this is all controlled with web service APIs, your application can automatically scale itself up and down depending on its needs.
Flexible – You have the choice of multiple instance types, operating systems, and software packages. Amazon EC2 allows you to select a configuration of memory, CPU, and instance storage that is optimal for your choice of operating system and application. For example, your choice of operating systems includes numerous Linux distributions, Microsoft Windows Server and OpenSolaris.
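As an illustration of renting a batch of machines programmatically, here is a rough sketch using the AWS SDK for Java (v1); the AMI ID, instance type and instance count are placeholders you would replace with an image that contains your application and whatever size/number fits the job:

```java
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.Instance;
import com.amazonaws.services.ec2.model.RunInstancesRequest;
import com.amazonaws.services.ec2.model.RunInstancesResult;

public class RentCpuCycles {
    public static void main(String[] args) {
        // Credentials and region come from the standard AWS provider chain
        // (environment variables, ~/.aws/credentials, instance profile, ...).
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();

        // Placeholder AMI and instance type: use an image that already
        // contains your batch application.
        RunInstancesRequest request = new RunInstancesRequest()
                .withImageId("ami-xxxxxxxx")
                .withInstanceType("c1.xlarge")
                .withMinCount(10)     // rent a batch of workers...
                .withMaxCount(10);    // ...and pay only while they run

        RunInstancesResult result = ec2.runInstances(request);
        for (Instance instance : result.getReservation().getInstances()) {
            System.out.println("Started " + instance.getInstanceId());
        }
        // Terminate the instances when the job finishes, otherwise the
        // hourly billing keeps running.
    }
}
```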
If your application is not parallel, you won't get many advantages from running it on a "big machine", unless the bottleneck is virtual memory swapping. Even the Top500 supercomputers are not essentially faster than an ordinary PC for sequential workloads.
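For completeness, this is essentially Amdahl's law: if a fraction p of the work can be parallelised across n workers, the best possible speedup is S(n) = 1 / ((1 - p) + p / n). With p = 0 (a purely sequential program) no number of machines helps at all, and even with p = 0.5 the speedup can never exceed 2x, no matter how large the cluster.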
If your application can exploit parallelism, maybe you could use your company's existing resources more efficiently than deploying it on one single PC. If you have a few dozen computers, you could set up a loosely coupled heterogeneous cluster (or local grid; the terminology changes with fashion).
I recommend CPUsage.
It is a "startup" in grid computing.
Its speciality is that any individual can join the grid with spare CPU cycles. That makes grid management cheap, so the grid usage prices are also very low.
They have an API which, if you integrate it into your program, allows it to run on their system.
