Welch Bounds for correlations

I am working on correlations and trying to find optimized codes for transmission. In this regard, Welch gives a bound for correlations based on the number of sequences and the length of these communication sequences.
The proof is only sketched in his 1974 publication; can anyone refer me to a simpler tutorial with simpler mathematics?

The Welch bound proof on Wikipedia is the most basic explanation. There are alternative proofs in the literature as well.
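For intuition, here is a small numerical sketch of the first Welch bound (assuming NumPy is available; the random sequences are only an illustration, not an optimized code set). For M unit-norm sequences of length N with M >= N, the maximum cross-correlation is at least sqrt((M - N) / (N (M - 1))):

```python
import numpy as np

def welch_bound(M, N):
    """First Welch lower bound on max |<x_i, x_j>|, i != j, for M unit-norm length-N sequences."""
    return np.sqrt((M - N) / (N * (M - 1)))

rng = np.random.default_rng(0)
M, N = 8, 4
X = rng.standard_normal((M, N))
X /= np.linalg.norm(X, axis=1, keepdims=True)  # normalize each sequence
G = X @ X.T                                    # Gram matrix of cross-correlations
np.fill_diagonal(G, 0)                         # ignore self-correlations
print("max cross-correlation:", np.abs(G).max())
print("Welch lower bound:    ", welch_bound(M, N))
```

Well-designed sequence families (Gold codes, for example) come much closer to the bound than random picks do.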

Extended Euclidean Algorithm runs in time O(log(m)^2)

I'm interested in a justification of the following line from the Wikipedia article:
"This algorithm [extended euclidean algorithm] runs in time O(log(m)^2), assuming |a| < m, and is generally more efficient than exponentiation."
http://en.wikipedia.org/wiki/Modular_multiplicative_inverse
Why is this so? Can anyone explain this to me? I understand the algorithm and all the maths completely; it is just that I do not see how to determine the complexity of such algorithms. Any more general hints?
Also: is log meant to be the natural logarithm (ln) or the one with base 2?
The popular Introduction to Algorithms book (http://mitpress.mit.edu/books/introduction-algorithms) has a whole chapter on proving algorithm complexity (though there's much more to the topic than what's in this book). You can read it if you're generally interested in the matter.
You might also try to follow this paper's references: http://itee.uq.edu.au/~havas/cats03.pdf
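As for the justification itself: by Lamé's theorem the number of division steps in the (extended) Euclidean algorithm is O(log m), with consecutive Fibonacci numbers as the worst case. A careful accounting of the divisions (each step costs roughly log m times the bit-length of its quotient, and the quotient bit-lengths sum to O(log m)) then gives O(log(m)^2) bit operations in total. And the base of the logarithm does not matter inside the O(), since logarithms to different bases differ only by a constant factor. A minimal sketch of the algorithm, for reference (a textbook version; the variable names are my own):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and a*x + b*y == g."""
    x0, x1, y0, y1 = 1, 0, 0, 1
    while b:
        q, a, b = a // b, b, a % b   # one division step; O(log m) such steps
        x0, x1 = x1, x0 - q * x1
        y0, y1 = y1, y0 - q * y1
    return a, x0, y0

def mod_inverse(a, m):
    g, x, _ = extended_gcd(a % m, m)
    if g != 1:
        raise ValueError("a is not invertible modulo m")
    return x % m

print(mod_inverse(3, 11))  # 4, since 3 * 4 = 12 = 1 (mod 11)
```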

Effectiveness vs efficiency of algorithms

Can someone please tell me what the effectiveness of an algorithm relates to? I understand what the efficiency component entails.
Thanks
Effectiveness relates to the ability to produce the desired result.
Some tasks inherently do not have strict definitions; machine translation between two human languages is an example. Different algorithms exist to translate, say, from English to Spanish; their effectiveness is a measure of how good the results they produce are. Their efficiency, on the other hand, measures how fast they produce those results, how much memory they use, how much disk space they need, and so on.
This question suggests that you have read something which refers to the effectiveness of algorithms and have not understood the author's explanation of the term -- if the author has provided one. I don't think there is a generally accepted interpretation of the term; I think it is one of those terms which falls under the Humpty-Dumpty rule, 'a word means what I say it means'.
It might refer to an aspect of some algorithms which return only approximate solutions to problems. For example, we all know that the travelling salesman problem is NP-hard; a practical algorithm which 'solves' the TSP might provide bounds on the difference between the solutions it can find and an optimal solution that would take too long to find.

Minimal addition-chain exponentiation

I know it has been proven NP-complete, and that's OK. I'm currently solving it with branch and bound, where I set the initial upper limit at the number of multiplications the normal binary square-and-multiply algorithm would take, and it does give the right answers, but I'm not satisfied with the running time (it can take several seconds for numbers around 200). This being an NP-complete problem, I'm not expecting anything spectacular, but there are often tricks to get the actual time under control somewhat.
Are there faster ways to do this in practice? If so, what are they?
This looks like section 4.6.3 "Evaluation of Powers" in Knuth Vol 2, Seminumerical Algorithms. It goes into considerable detail on various approaches, which look much quicker than branch and bound but do not always provide the absolute best solution.
Knuth states in the discussion after Theorem F that he uses backtrack search to prove that l(191) = 11, so I doubt you will find a short-cut answer for this. He defers explanation of the backtrack search to section 7.2.2, which I think is still unpublished, although there are traces of work on it at http://www-cs-faculty.stanford.edu/~uno/programs.html.
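To make the backtracking idea concrete, here is a minimal iterative-deepening sketch in Python, seeded with the binary square-and-multiply bound the question mentions. It restricts itself to strictly increasing chains (safe for minimality) and is purely illustrative; it is nowhere near as fast as a tuned backtrack search:

```python
def shortest_addition_chain(n):
    """Iterative-deepening DFS for a minimal addition chain ending in n."""
    if n == 1:
        return [1]
    # binary-method upper bound: floor(lg n) + popcount(n) - 1 additions
    limit = n.bit_length() - 1 + bin(n).count("1") - 1

    def dfs(chain, depth):
        last = chain[-1]
        if last == n:
            return chain
        if depth == 0 or last << depth < n:  # even doubling every step falls short
            return None
        tried = set()
        for i in range(len(chain) - 1, -1, -1):   # prefer larger sums (greedy heuristic)
            for j in range(i, -1, -1):
                s = chain[i] + chain[j]
                if last < s <= n and s not in tried:
                    tried.add(s)
                    found = dfs(chain + [s], depth - 1)
                    if found:
                        return found
        return None

    for depth in range(1, limit + 1):
        found = dfs([1], depth)
        if found:
            return found

print(shortest_addition_chain(15))  # a 5-addition chain, e.g. [1, 2, 4, 5, 10, 15]
```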
Metaheuristic algorithms will scale far better. They include tabu search, genetic algorithms, simulated annealing, and others.
There are a couple of free books and free software packages out there.
I'm late to the party, but in the Handbook of Elliptic and Hyperelliptic Curve Cryptography there is a chapter, "9.2 Fixed exponent", which also discusses various kinds of addition chains.

Recursion Tree, Solving Recurrence Equations

As far as I know, there are four ways to solve recurrence equations:
1. Recursion trees
2. Substitution
3. Iteration
4. Derivative
We are asked to use substitution, for which we will need to guess a formula for the output. I read in the CLRS book that there is no magic way to do this; I was curious whether there are any heuristics for it.
I can certainly get an idea by drawing a recurrence tree or using iteration, but because the output will be in Big-O or Theta form, the formulas don't necessarily match.
Does any one have any recommendation for solving recurrence equations using substitution?
Please note that the list of possible ways to solve recurrence equations is definitely not complete; it's merely a set of tools they teach computer scientists, because those will most likely solve most of your problems.
For exact solutions of recurrence equations mathematicians use a tool called generating functions. Generating functions give you exact solutions, and in general are more powerful than the master theorem.
There is a great resource online to learn about them here: http://www.math.upenn.edu/~wilf/DownldGF.html
If you go through the first couple examples you should get the hang of it in no time.
You need some math background and an understanding of rudimentary Taylor series: http://en.wikipedia.org/wiki/Taylor_series
Generating functions are also extremely useful in probability.
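If you want to experiment without doing the algebra by hand, SymPy's rsolve can produce exact solutions of linear recurrences with constant coefficients (a small sketch, assuming SymPy is installed):

```python
from sympy import Function, rsolve
from sympy.abc import n

f = Function("f")
# Fibonacci recurrence: f(n) = f(n-1) + f(n-2) with f(0) = 0, f(1) = 1
closed_form = rsolve(f(n) - f(n - 1) - f(n - 2), f(n), {f(0): 0, f(1): 1})
print(closed_form)  # Binet-style formula in terms of the golden ratio
```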
For simple ones, just take a "reasonable" guess.
For more complicated ones, I would go ahead and use a recurrence tree — it seems to me to be the easiest "algorithm" for generating a guess. Note that it can be difficult to use a recurrence tree to prove a bound (the details are tough to get right). Recurrence trees are highly useful for forming guesses which are then proven by substitution.
I'm not sure why you're saying the formulas won't match the output in Big-O or Theta. They typically don't match exactly, but that's part of the point of Big-O. Part of the trick of going back to substitution is knowing how to plug the Big-O solution in so that the substitution algebra works out. IIRC, CLRS does work out an example or two of this.
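As a concrete instance of the kind of worked example CLRS gives, here is the substitution step for T(n) = 2T(n/2) + n with the guess T(n) <= c·n·lg n (a sketch in LaTeX; lg is the base-2 logarithm, and the aligned environment needs amsmath):

```latex
% Inductive step: assume T(n/2) <= c (n/2) lg(n/2) and substitute.
\[
\begin{aligned}
T(n) &= 2\,T(n/2) + n \\
     &\le 2\,c\,\frac{n}{2}\lg\frac{n}{2} + n \\
     &= c\,n\lg n - c\,n + n \\
     &\le c\,n\lg n \qquad \text{for every } c \ge 1,
\end{aligned}
\]
% hence T(n) = O(n lg n), matching the master theorem.
```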

How is linear algebra used in algorithms?

Several of my peers have mentioned that "linear algebra" is very important when studying algorithms. I've studied a variety of algorithms and taken a few linear algebra courses and I don't see the connection. So how is linear algebra used in algorithms?
For example, what interesting things can one do with a connectivity matrix for a graph?
Three concrete examples:
Linear algebra is the foundation of modern 3D graphics. This is essentially the same thing that you've learned in school: the data is kept in a 3D space that is projected onto a 2D surface, which is what you see on your screen.
Most search engines are based on linear algebra. The idea is to represent each document as a vector in a high-dimensional space and see how the vectors relate to each other in this space. This is used by the Lucene project, amongst others; see the vector space model (VSM) and the sketch below.
Some modern compression algorithms, such as the one used by the Ogg Vorbis format, are based on linear algebra, or more specifically a method called vector quantization.
Basically it comes down to the fact that linear algebra is a very powerful method when dealing with multiple variables, and there are enormous benefits to using it as a theoretical foundation when designing algorithms. In many cases this foundation isn't as apparent as you might think, but that doesn't mean it isn't there. It's quite possible that you've already implemented algorithms which would have been incredibly hard to derive without linear algebra.
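As a toy illustration of the vector-space idea in the second example (the three-word vocabulary and the counts below are invented for the sketch):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two term-count vectors."""
    dot = sum(p * q for p, q in zip(a, b))
    norm = math.sqrt(sum(p * p for p in a)) * math.sqrt(sum(q * q for q in b))
    return dot / norm if norm else 0.0

# documents as vectors over the vocabulary ["algorithm", "linear", "graph"]
doc1 = [2, 1, 0]
doc2 = [0, 1, 2]
query = [1, 0, 0]
print(cosine_similarity(doc1, query))  # ~0.89: shares the query's term
print(cosine_similarity(doc2, query))  # 0.0: no terms in common
```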
A cryptographer would probably tell you that a grasp of number theory is very important when studying algorithms. And he'd be right--for his particular field. Statistics has its uses too--skip lists, hash tables, etc. The usefulness of graph theory is even more obvious.
There's no inherent link between linear algebra and algorithms; there's an inherent link between mathematics and algorithms.
Linear algebra is a field with many applications, and the algorithms that draw on it therefore have many applications as well. You've not wasted your time studying it.
Ha, I can't resist putting this here (even though the other answers are good):
The $25 billion dollar eigenvector.
I'm not going to lie... I never even read the whole thing... maybe I will now :-).
I don't know if I'd phrase it as "linear algebra is very important when studying algorithms"; I'd almost put it the other way around. Many, many, many real-world problems end up requiring you to solve a set of linear equations. If you end up having to tackle one of those problems, you are going to need to know about some of the many algorithms for dealing with linear equations. Many of those algorithms were developed when "computer" was a job title, not a machine. Consider Gaussian elimination and the various matrix decomposition algorithms, for example. There is a lot of very sophisticated theory on how to solve those problems for very large matrices.
Most common methods in machine learning end up having an optimization step which requires solving a set of simultaneous equations. If you don't know linear algebra you'll be completely lost.
Many signal processing algorithms are based on matrix operations, e.g. Fourier transform, Laplace transform, ...
Optimization problems can often be reduced to solving linear equation systems.
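As a minimal illustration of the "solve a set of simultaneous equations" point, a sketch with NumPy (the system here is made up for the example):

```python
import numpy as np

# solve  2x + y = 5  and  x - 3y = -1  simultaneously
A = np.array([[2.0, 1.0],
              [1.0, -3.0]])
b = np.array([5.0, -1.0])
x = np.linalg.solve(A, b)  # LU factorization via LAPACK under the hood
print(x)  # [2. 1.]
```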
Linear algebra is also important in many algorithms in computer algebra, as you might have guessed. For example, if you can reduce a problem to saying that a polynomial is zero, where the coefficients of the polynomial are linear in the variables x1, …, xn, then you can solve for the values of x1, …, xn that make the polynomial equal to 0 by equating the coefficient of each power of x to 0 and solving the linear system. This is called the method of undetermined coefficients, and is used for example in computing partial fraction decompositions or in integrating rational functions.
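A small sketch of the method in that partial-fraction setting (assuming SymPy; the rational function is a toy example). Writing 1/((x+1)(x+2)) = A/(x+1) + B/(x+2) and clearing denominators gives 1 = A(x+2) + B(x+1); equating coefficients of each power of x yields a linear system in A and B:

```python
from sympy import symbols, Eq, expand, solve

x, A, B = symbols("x A B")
lhs = expand(A * (x + 2) + B * (x + 1))   # A*x + 2*A + B*x + B
system = [Eq(lhs.coeff(x, 1), 0),         # coefficient of x^1 must be 0
          Eq(lhs.coeff(x, 0), 1)]         # constant term must be 1
print(solve(system, [A, B]))              # {A: 1, B: -1}
```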
For graph theory, the coolest thing about an adjacency matrix is that if you take the nth power M^n of the adjacency matrix of an unweighted graph (each entry is either 0 or 1), then each entry (i, j) is the number of walks of length n from vertex i to vertex j (walks may revisit vertices, unlike simple paths). And if that isn't just cool, then I don't know what is.
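A quick NumPy check of this fact on a toy graph (a 4-cycle):

```python
import numpy as np

# adjacency matrix of the 4-cycle 0-1-2-3-0
M = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
M3 = np.linalg.matrix_power(M, 3)
# entry (i, j) of M^3 counts walks of length 3 from vertex i to vertex j
print(M3[0, 1])  # 4: namely 0-1-0-1, 0-1-2-1, 0-3-0-1, 0-3-2-1
```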
All of the answers here are good examples of linear algebra in algorithms.
As a meta answer, I will add that you might be using linear algebra in your algorithms without knowing it. Compilers that optimize with SSE(2) typically vectorize your code by having many data values manipulated in parallel. This is essentially elementary linear algebra.
It depends what type of "algorithms".
Some examples:
Machine-Learning/Statistics algorithms: Linear Regressions (least-squares, ridge, lasso).
Lossy compression of signals and other processing (face recognition, etc.); see Eigenfaces.
For example, what interesting things can one do with a connectivity matrix for a graph?
A lot of algebraic properties of the matrix are invariant under permutations of vertices (for example abs(determinant)), so if two graphs are isomorphic, their values will be equal.
This is a source of good heuristics for determining whether two graphs are not isomorphic, since of course equality does not guarantee the existence of an isomorphism.
Check algebraic graph theory for a lot of other interesting techniques.
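A tiny sketch of that invariant idea (assuming NumPy; the graphs are toy examples): a path and a triangle on three vertices get different values of abs(det), which certifies that they are not isomorphic:

```python
import numpy as np

def iso_invariant(adj):
    """abs(det) is unchanged by relabelling vertices (permutation similarity)."""
    return abs(np.linalg.det(np.array(adj, dtype=float)))

path     = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]  # path graph 0-1-2
triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]  # 3-cycle
print(iso_invariant(path), iso_invariant(triangle))  # 0.0 vs ~2.0: not isomorphic
```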
