Any concrete suggestions for computing application/system reliability?
For predicted reliability, you multiply the predicted reliabilities of all critical components together (under the assumption that the reliabilities are independent, which is not generally safe). With non-critical components, you've got to work out whether they group together to form a critical system, or whether there is some characteristic time they can be down for before becoming critical, or … Well, in summary, you've just got to analyze very carefully.
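As a minimal sketch of that multiplication, assuming independence and purely hypothetical per-component numbers (a redundant group is included for contrast):

```python
# Hypothetical predicted-reliability arithmetic, assuming independent failures.

def series_reliability(reliabilities):
    """Critical components in series: the system is up only if all are up."""
    result = 1.0
    for r in reliabilities:
        result *= r
    return result

def parallel_reliability(reliabilities):
    """Redundant group: the group is down only if every replica is down."""
    p_all_down = 1.0
    for r in reliabilities:
        p_all_down *= (1.0 - r)
    return 1.0 - p_all_down

# e.g. load balancer, app server, database in series (made-up figures)
print(series_reliability([0.9999, 0.9995, 0.9999]))   # ~0.9993
# e.g. two redundant app servers
print(parallel_reliability([0.999, 0.999]))           # ~0.999999
```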
But predicted reliability is not the same as measured reliability! If you're at all serious about this (and 99.9999% reliability is generally very serious stuff) then you're going to have to measure, and you need to work out very carefully what to measure, and from what perspective. There's no point in measuring website availability from within the same cluster if the characteristic problem of the deployment is off-site networking bandwidth.
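As a rough illustration of measuring from the user's side of the network rather than from inside the cluster (the URL, timeout, and sample count are placeholders):

```python
# Sketch: measure availability from outside the cluster, from (roughly) the
# user's perspective. URL, timeout and sample count are placeholders.

import time
import urllib.request

def probe(url="https://example.com/health", timeout_s=2.0):
    try:
        start = time.monotonic()
        with urllib.request.urlopen(url, timeout=timeout_s) as resp:
            return resp.status == 200 and (time.monotonic() - start) < timeout_s
    except Exception:
        return False

# In practice this would run continuously from several vantage points.
samples = [probe() for _ in range(100)]
print("measured availability:", sum(samples) / len(samples))
```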
When you're building something in a web development scenario you're often thinking of costs/resources, and you're often juggling between three resources:
CPU (Processing in general)
Memory (Storage in general)
Network/Bandwidth (Or maybe even external/server resources)
The theorem here is simple: you can only choose two of these to be low.
If you want low CPU and memory, you'll have to ask the server to do the work (high bandwidth usage).
If you want low memory and bandwidth, the CPU will have to do extra work to create and recreate things on the fly.
If you want low CPU and bandwidth, memory will have to store more information, possibly including duplicated data (see the sketch after this list).
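To make the trade-off concrete, here is a toy sketch in which the same result is recomputed (CPU pays), memoized (memory pays), or fetched from a server (bandwidth pays); the endpoint is purely hypothetical:

```python
# Toy illustration: the same result can be recomputed, cached, or fetched
# from a server; each option loads a different resource.

import functools
import urllib.request

def recompute(n):
    # Low memory, low bandwidth: the CPU pays every time.
    return sum(i * i for i in range(n))

@functools.lru_cache(maxsize=None)
def cached(n):
    # Low CPU (after the first call), low bandwidth: memory pays.
    return sum(i * i for i in range(n))

def fetch(n):
    # Low CPU, low memory: bandwidth (and the server) pays.
    # Hypothetical endpoint that does the work server-side.
    with urllib.request.urlopen(f"https://example.com/sum-squares?n={n}") as r:
        return int(r.read())
```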
My question here is: is there a name for this theorem, or for the management of these three resources? I would like to know more about the theory behind choosing the best option in each scenario, and about research related to this.
To be honest, I don't know if this is the right community for this question; it is mostly a theoretical/academic question.
It doesn't work this way. In most cases you can achieve a low usage percentage for all three resources simply by adding more of them: if you have high CPU usage, deploy your app to a server with twice as many CPUs. So if you have money, you can achieve low usage levels. In CAP, by contrast, it doesn't matter how much money you have.
Also from Wikipedia:
CAP is frequently misunderstood as if one has to choose to abandon one of the three guarantees at all times. In fact, the choice is really between consistency and availability only when a network partition or failure happens; at all other times, no trade-off has to be made
A distributed system is described as scalable if it remains effective when there is a significant increase in the number of resources and the number of users. However, such systems sometimes face performance bottlenecks. How can these be avoided?
The question is pretty broad, and depends entirely on what the system is doing.
Here are some things I've seen in systems to reduce bottlenecks.
Use caches to reduce network and disk bottlenecks. But remember that knowing when to evict from a cache can be a hard problem in some situations.
Use message queues to decouple components in the system. This way you can add more hardware to specific parts of the system that need it.
Delay computation when possible (often by using message queues). This takes the heat off the system during high-processing times.
Of course, design the system for parallel processing wherever possible. One host doing all the processing is NOT scalable. Note: most relational databases fall into the one-host bucket, which is why NoSQL has suddenly become popular; but it is not always the appropriate choice (at least in theory).
Use eventual consistency if possible. Strong consistency is much harder to scale.
Some are proponents of CQRS and DDD. Though I have never seen or designed a "CQRS system" or a "DDD system," those ideas have definitely affected the way I design systems.
There is a lot of overlap in the points above; some of the techniques may use some of the others.
But experience (your own and others') eventually teaches you about scalable systems. I keep up to date by reading about designs from Google, Amazon, Twitter, Facebook, and the like. Another good starting point is the High Scalability blog.
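As a tiny sketch of the "message queue" and "delay computation" points above, using only the standard library (a real deployment would put a broker between producers and workers; the job shape and handler here are made up):

```python
# Decouple slow work from the request path behind an in-process queue.

import queue
import threading

work = queue.Queue()

def handle(job):
    print("processing", job)          # the slow part, off the request path

def worker():
    while True:
        job = work.get()
        if job is None:               # sentinel used to stop the worker
            break
        handle(job)
        work.task_done()

threading.Thread(target=worker, daemon=True).start()

# The "request path" just enqueues and returns immediately.
work.put({"user_id": 42, "action": "resize-image"})
work.join()                           # wait for outstanding jobs (demo only)
work.put(None)
```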
Just to build on a point discussed in the post above, I would add that what you need for your distributed system is a distributed cache. When you intend to scale your application, the distributed cache acts as an "elastic" data fabric, meaning you can increase the storage capacity of the cache without compromising performance, while at the same time giving you a reliable platform that is accessible to multiple applications.
One such distributed caching solution is NCache. Do take a look!
A customer is running a clustered web application server under considerable load. He wants to know if the upcoming application, which is not implemented yet, will still be manageable by his current setup.
Is there an established method to predict the performance impact of an application still in the concept stage, based on an existing requirement specification (or maybe a functional design specification)?
First priority would be to predict the impact on CPU resources.
Is it possible to get fairly exact results at all?
I'd say the canonical answer is no. You always have to benchmark the actual application being deployed on its target architecture.
Why? Software and software development are not predictable. And systems are even more unpredictable.
Even if you know the requirements now and have done deep analysis what happens if:
The program has a performance bug (or two...) - which might even be a bug in a third-party library
New requirements are added or requirements change
The analysis and design don't spot all the hidden inter-relationships between components
There are non-linear effects of adding load and the new load might take the hardware over a critical threshold (a threshold that is not obvious now).
These concerns are not theoretical. If they were, SW development would be trivial and projects would always be delivered on time and to budget.
However there are some heuristics I personally used that you can apply. First you need a really good understanding of the current system:
Break the existing system's functions down into small, medium and large and benchmark those on your hardware
Perform a load test of these individual functions and capture throughput in transactions/sec, CPU cost, network traffic and disk I/O figures for as many of these transactions as possible, making sure you have representation of small, medium and large. This load test should take the system up to the point where additional load decreases transactions/sec
Get the figures for the max transactions/sec of the current system
Understand the rate of growth of this application and plan accordingly
Perform the analysis to get an 'average' small, medium and large 'cost' in terms of CPU, RAM, disk and network. This would be of the form:
Small transaction
CPU time: 10 ms
RAM overhead: 5 MB (cache)
RAM working set: 100 KB (e.g. 10 concurrent threads = 1 MB, 100 threads = 10 MB)
Disk I/O: 5 KB (database)
Network app<->DB: 10 KB
Network app<->browser: 40 KB
From this analysis you should understand how much headroom you have: CPU certainly, but check that there is sufficient RAM, network and disk capacity too. E.g., the CPU required for small transactions is the number of small transactions per second multiplied by the CPU cost of a small transaction. Add in the CPU cost of the medium and large transactions, and you have your CPU budget.
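As a back-of-the-envelope sketch of that budget arithmetic (the small-transaction cost echoes the example above; the medium/large costs and the peak transaction rates are assumptions):

```python
# Back-of-the-envelope CPU budget from per-bucket costs and peak rates.

costs_ms = {"small": 10, "medium": 50, "large": 200}   # CPU ms per transaction
peak_tps = {"small": 300, "medium": 40, "large": 5}    # transactions/sec at peak

cpu_ms_per_sec = sum(costs_ms[k] * peak_tps[k] for k in costs_ms)
cores_needed = cpu_ms_per_sec / 1000.0                 # 1000 ms of CPU = 1 core

print(f"CPU demand: {cpu_ms_per_sec} ms/s, roughly {cores_needed:.1f} cores")
# Compare against the cores actually available to see your headroom.
```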
Make sure the DBAs are involved. They need to do the same on the DB.
Now you need to analyse your upcoming application:
Assign each feature to the same small, medium and large buckets, ensuring a like-for-like match as far as possible
Ask deep, probing questions about how many transactions/sec each feature will experience at peak (these figures feed the headroom sketch after this list)
Talk about the expected rate of growth of the application
Don't forget that the system may slow as the size of the database increases
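Continuing the sketch above, a rough headroom check once the upcoming application's features have been bucketed and their peak rates estimated (every number here is hypothetical):

```python
# Add the upcoming application's estimated demand to the existing system's
# and compare with capacity. All figures are made up.

existing_cpu_ms_per_sec = 6000          # from the current-system load test
new_costs_ms = {"small": 10, "medium": 50, "large": 200}
new_peak_tps = {"small": 100, "medium": 20, "large": 2}   # from the interviews

new_cpu_ms_per_sec = sum(new_costs_ms[k] * new_peak_tps[k] for k in new_costs_ms)
total_cores = (existing_cpu_ms_per_sec + new_cpu_ms_per_sec) / 1000.0

available_cores = 12                    # what the cluster actually has
print(f"projected demand: {total_cores:.1f} of {available_cores} cores")
# Leave generous headroom for growth and for non-linear effects near saturation.
```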
On a personal note, you are being asked to predict the unpredictable - putting your name and reputation on the line. If you say it can fit, you are owning the risk for a large software development project. If you are being pressured to say yes, you need to ensure that there are many other people's names involved along with yours - and those names should all be visible on the go/no-go decision. Not only is this more likely to ensure that all factors are considered, and that the analysis is sound, but it will also ensure that the project has many involved individuals personally aligned to its success.
What is the performance of a modern FPGA relative to a CPU, in absolute terms (GFLOPS/GIOPS), and what is the cost of one billion integer operations per second on an FPGA?
And for which tasks is it now beneficial to use an FPGA?
I only found this:
http://www.hpcwire.com/hpcwire/2010-11-22/the_expanding_floating-point_performance_gap_between_fpgas_and_microprocessors.html
And an old article:
http://www.mouldy.org/fpgas-in-cryptanalysis.pdf
Disclaimer: I work for SRC Computers, a heterogeneous CPU/FPGA system manufacturer.
"It depends", of course, is the answer.
A microprocessor is a fixed set of functional units. These perform reasonably well across a broad range of applications.
An FPGA is programmed by a designer with a specific set of functional units designed solely to execute a specific application. As such, it (often) performs very well for a given application.
"How much is the performance of modern FPGA relative to CPU, absolutely in (GFlops/GIops)" becomes a meaningless question. It can be answered for the the microprocessor as it has a fixed set of floating point units. However, for an FPGA, the question evolves into 1) how large if the FPGA, 2) how many floating point units can I pack into it and still do useful work, what is the memory/support architecture around the FPGA and 4) what are the sustained system bandwidths between the FPGA, its memory and the rest of the system?
The answer to "what is the cost of one billion integer operations per second on the FPGA" is similarly addressed by the preceding paragraph.
An interesting thing to keep in mind around performance is that in an FPGA, peak performance equals sustained performance since the FPGA is dedicated to executing a given application. As long as other system parameters do not interfere, of course.
Your question "And for which tasks is it now beneficial to use an FPGA?" is very broad, and the answer grows with every large FPGA device release. In extremely broad, non-exclusive terms, parallel and streaming applications benefit, although application performance is to a large extent determined by the system architecture.
Assume an embedded environment which has a DSP core (or any other processor core).
If I have code for some application/functionality which is optimized to be one of the best from the point of view of cycles consumed (MCPS), will it also be the best code from the point of view of power consumed in a real hardware system?
Can code optimized for the lowest MCPS be guaranteed to have the lowest power consumption as well?
I know there are many aspects to be considered here, like the architecture of the underlying processor and the hardware system (memory, bus, etc.).
Very difficult to tell without putting a sensitive ammeter between your board and its power supply and logging the current drawn. My approach is to test assumptions for various real-world scenarios rather than go with the supporting documentation.
No, lowest cycle count will not guarantee lowest power consumption.
It's a good indication, but you didn't take into account that memory bus activity consumes quite a lot of power as well.
For example, your code may have a higher cycle count but lower power consumption if you move frequently needed data into internal memory (on-chip RAM), since on-chip accesses avoid the external memory bus. Accessing the data there won't increase the cycle count of your algorithms, but moving the data into and out of the internal memory does increase the cycle count.
If your system has a cache as well as internal memory, optimize for best cache utilization as well.
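A toy energy model makes the point: if external memory accesses cost much more energy than a core cycle, the version with fewer cycles can still draw more power (the per-event costs below are invented):

```python
# Toy energy model: cycles and external memory accesses each cost energy.
# Fewer cycles does not automatically mean less energy if the bus is busier.

def energy_nj(cycles, ext_mem_accesses, nj_per_cycle=0.5, nj_per_ext_access=10.0):
    return cycles * nj_per_cycle + ext_mem_accesses * nj_per_ext_access

fast_but_bus_heavy = energy_nj(cycles=1_000_000, ext_mem_accesses=200_000)
slower_on_chip = energy_nj(cycles=1_200_000, ext_mem_accesses=20_000)

print(fast_but_bus_heavy)   # 2,500,000 nJ
print(slower_on_chip)       #   800,000 nJ -> more cycles, yet less energy
```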
This isn't a direct answer, but I thought this paper (from this answer) was interesting: Real-Time Task Scheduling for Energy-Aware Embedded Systems.
As I understand it, it tries to run each task in the processor's low-power state, unless the deadline can't be met without the high-power state. So in a scheme like that, more time-efficient code (fewer cycles) should allow the processor to spend more time throttled back.