Time complexity in terms of O? [closed]

What is the time complexity of an algorithm whose running time diminishes as the input size increases?

That depends on how quickly the time diminishes. For example, if doubling the input size halves the runtime, the runtime is in O(1/n).
Basically, this is no different from the usual case, where increasing the input size also increases the runtime.
Of course, this is strictly theoretical: in practice there can be no algorithm whose runtime keeps decreasing as the input size approaches infinity, since the work can never drop below a single operation.
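For illustration, here is a toy Python sketch (entirely made up for this answer; the budget constant is arbitrary) whose work per call shrinks roughly as 1/n but is floored at one operation, which is exactly why the runtime cannot keep diminishing forever:

```python
import time

def inverse_work(n, budget=1_000_000):
    """Does roughly budget/n units of work, floored at one operation."""
    steps = max(1, budget // n)  # the floor of 1 is what eventually breaks O(1/n)
    total = 0
    for _ in range(steps):
        total += 1
    return total

# Doubling n roughly halves the runtime, until the one-operation floor is hit.
for n in (10, 100, 1_000, 10_000):
    start = time.perf_counter()
    inverse_work(n)
    print(n, time.perf_counter() - start)
```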

Whatever the limit of the runtime is as the input size approaches infinity, that limit is (up to a constant) an upper bound for all "large enough" inputs, so the runtime is in O(1).

Related

MATLAB's blockproc vs. for-loop speed [closed]

I need to perform a computation for each pixel in an image. I currently have code with a double for-loop that takes a long time. Would converting this to "blockproc(I,[1 1],fun)" give any kind of speedup?
Thanks!
If you have access to the Parallel Computing Toolbox and R2012a or later, you can use blockproc with its 'parallel' option. Alternatively, you can use parfor on the outer loop. While it's possible that blockproc is still faster, you should definitely profile the two options side by side.
You may also be able to achieve significant speed gains by changing your algorithm, for example by breaking a 2D filter down into two 1D filters (see the sketch below).
Aside: the big advantage blockproc can have over nested-loop solutions is that it can work on images that are too big to fit in RAM, i.e. it takes care of loading sub-images for you.
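As a sketch of the separable-filter idea mentioned above (in Python/NumPy rather than MATLAB, with a made-up Gaussian kernel and image size), replacing one k x k 2D convolution with two length-k 1D passes drops the per-pixel cost from O(k^2) to O(k):

```python
import numpy as np
from scipy.ndimage import convolve1d

def gaussian_kernel_1d(sigma, radius):
    """A normalized 1D Gaussian kernel of length 2*radius + 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def separable_blur(image, sigma=2.0, radius=6):
    """A 2D Gaussian blur done as two 1D passes (the kernel is separable)."""
    k = gaussian_kernel_1d(sigma, radius)
    tmp = convolve1d(image, k, axis=0)  # filter down the columns
    return convolve1d(tmp, k, axis=1)   # then along the rows

image = np.random.rand(512, 512)  # stand-in for a real image
blurred = separable_blur(image)
```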

Why is it so difficult to program a true random number generator? [closed]

I don't understand why a PRNG is easier to program than a true RNG. Shouldn't a typical processor make short work of producing a truly random number?
Computers are deterministic machines: given the same input, code included, they will produce the same result. To get true randomness you need to introduce something random from the real world, like the current time, cosmic rays, or something else that you can't predict.
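A minimal Python sketch of the contrast (the seed and sizes are arbitrary): a seeded PRNG replays exactly the same sequence, while os.urandom and secrets draw from the operating system's entropy pool, which mixes in unpredictable physical events:

```python
import os
import random
import secrets

# Deterministic PRNG: the same seed always yields the same sequence.
rng1 = random.Random(42)
rng2 = random.Random(42)
print([rng1.random() for _ in range(3)] == [rng2.random() for _ in range(3)])  # True

# OS entropy sources: non-reproducible, backed by unpredictable events.
print(os.urandom(8).hex())     # 8 "truly random" bytes from the OS pool
print(secrets.randbits(64))    # a 64-bit value suitable for security use
```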

Cutting rectangular pieces out of a rectangular paper and minimizing wastage [closed]

A rectangular piece of paper of size W*H (width * height) is given. One is supposed to cut rectangular pieces out of it. A list of k piece sizes is given, each size specified as w*h. All the numbers are integers.
Each cut must go from one end of the paper to the other.
Any number of pieces of each listed size may be cut (including none).
The aim is to use as much of the paper as possible, i.e. to minimize wastage.
Can anyone suggest how to approach this problem?
This is essentially a knapsack-style problem (more precisely, a two-dimensional cutting-stock problem with guillotine cuts). I will spare you the details here, but you can find more info and ideas on how to approach it here:
http://en.wikipedia.org/wiki/Knapsack_problem
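If "the cut must go from one end to the other" means guillotine cuts (every cut spans the full width or height of the current rectangle), a standard dynamic program applies. Below is a minimal Python sketch under the assumptions that pieces cannot be rotated and that any number of copies of each size may be used:

```python
from functools import lru_cache

def max_used_area(W, H, pieces):
    """Maximum total area of pieces cuttable from a W x H sheet
    using guillotine cuts only. pieces: list of (w, h) sizes,
    unlimited copies each, no rotation."""
    piece_set = set(pieces)

    @lru_cache(maxsize=None)
    def best(w, h):
        if w == 0 or h == 0:
            return 0
        # Option 1: the whole w x h rectangle is exactly one piece.
        value = w * h if (w, h) in piece_set else 0
        # Option 2: a vertical cut at x splits the sheet into two parts.
        for x in range(1, w // 2 + 1):
            value = max(value, best(x, h) + best(w - x, h))
        # Option 3: a horizontal cut at y.
        for y in range(1, h // 2 + 1):
            value = max(value, best(w, y) + best(w, h - y))
        return value

    return best(W, H)

# Wastage is W*H minus the result.
print(max_used_area(5, 4, [(2, 3), (1, 1)]))  # 20: the sheet can be filled
```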

Hash function for classification [closed]

Given a known set $A$ of distinct numbers in the range $0$ to $2^{n+1}-1$; in binary, each number is an $(n+1)$-dimensional vector with 0/1 elements. Now, for an arbitrary subset $S$ containing $m$ distinct numbers of $A$, is it possible to find a function $f$ such that $f(S) = \{0, 1, \ldots, m-1\}$, while $f(A \setminus S)$ does not fall in $\{0, 1, \ldots, m-1\}$? The function $f$ should be as simple as possible; a linear one is preferred. Thanks.
The keyword you're looking for is a minimal perfect hash function, and yes, it's always possible to construct a minimal perfect hash function for a given S.
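As a toy illustration (a brute-force search, not one of the scalable constructions such as CHD or BDZ): one can search a family of linear hashes for one that maps $S$ bijectively onto $\{0, \ldots, m-1\}$. Note that this only constrains the values on $S$; elements of $A \setminus S$ may still land in that range, so in practice you would pair the hash with a membership check on $S$:

```python
import random

def find_minimal_perfect_hash(S, prime=2_147_483_647, tries=100_000):
    """Randomized search for h(x) = ((a*x + b) mod prime) mod m that maps
    S bijectively onto {0, ..., m-1}. Only feasible for small |S|: the
    success probability per try is about m!/m^m."""
    m = len(S)
    for _ in range(tries):
        a = random.randrange(1, prime)
        b = random.randrange(prime)
        images = {((a * x + b) % prime) % m for x in S}
        if len(images) == m:  # all m values distinct => bijection onto 0..m-1
            return lambda x, a=a, b=b: ((a * x + b) % prime) % m
    return None  # no suitable (a, b) found within the try budget

S = {3, 17, 42, 1000, 65535}
f = find_minimal_perfect_hash(S)
print(sorted(f(x) for x in S))  # [0, 1, 2, 3, 4]
```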

Metrics for algorithms [closed]

Can anyone provide a complete list of metrics for rating an algorithm?
For example, my list starts with:
elegance
readability
computational efficiency
space efficiency
correctness
This list is in no particular order, and I suspect it is nowhere near complete. Can anyone provide a more complete list?
An exhaustive list may be difficult to fit in a concise answer, since some important qualities only apply to a subset of algorithms, like "levels of security offered by an encryption system for particular key sizes".
In any case, I'm interested to see more additions people might have. Here are a few:
optimality (mathematically proven to be the best possible)
accuracy / precision (for heuristics)
bounds on best-, worst-, and average-case behavior
pathological cases (blow-ups on adversarially chosen inputs, or encryption systems that do poorly for particular "weak" keys)
safety margin (encryption systems are breakable given enough time and resources)
