Algorithm to find the logarithm of a number [duplicate]

How does the log function work? How is the log of a with base b calculated?

There are many algorithms for doing this. One of my favorites (WARNING: shameless plug) is this one based on the Fibonacci numbers. The code contains a pretty elaborate comment that goes into how the math works. Given a and a^b, it runs in O(lg b) time and O(1) space.
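The Fibonacci trick itself lives in the linked answer; as a rough sketch of the same O(lg b) idea, here is an integer logarithm built from repeated squaring instead (hypothetical `ilog` helper, Python):

```python
def ilog(a, n):
    """Return floor(log base a of n) for integers a >= 2, n >= 1,
    using O(lg result) multiplications via repeated squaring."""
    powers = []              # powers[k] holds a**(2**k)
    p = a
    while p <= n:            # climb: collect every a**(2**k) <= n
        powers.append(p)
        p *= p
    result, acc = 0, 1       # invariant: acc == a**result <= n
    for k in reversed(range(len(powers))):
        if acc * powers[k] <= n:   # greedily set bit k of the exponent
            acc *= powers[k]
            result += 1 << k
    return result

print(ilog(2, 1024))  # 10
print(ilog(3, 100))   # 4  (3**4 = 81 <= 100 < 243)
```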
Hope this helps!


Is there an easy way to work out the Big O value for time and space complexity? [duplicate]

I have been learning about sorting algorithms and understand the idea of time and space complexity. I was wondering if there is a fairly quick and easy way of working out the complexities given the algorithm (one that may even be quick enough to do in an exam), as opposed to memorizing the complexities of all the algorithms.
If that isn't an option, is there an easy way to remember or learn them for some of the more basic sorting algorithms, such as merge sort and quicksort?
You mostly have to memorize these; the best option depends on what the situation demands:
In place and stable: Insertion Sort
In place (don't care about stable): Heap Sort
Stable (don't care about in place): Merge Sort
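As for working a complexity out by hand, counting the loop structure usually gets you most of the way; a minimal sketch of the two patterns that show up most often (Python, hypothetical helper names):

```python
def count_duplicate_pairs(items):
    """Two nested loops over n items => O(n^2) time, O(1) extra space."""
    n, count = len(items), 0
    for i in range(n):                 # outer loop: n iterations
        for j in range(i + 1, n):      # inner loop: up to n iterations each
            if items[i] == items[j]:
                count += 1
    return count

def halving_steps(n):
    """The loop variable halves every step => O(log n) time."""
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps
```

The same counting argument is where the named complexities come from: merge sort halves the input (log n levels) and does O(n) work per level, hence O(n log n).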

Algorithms that use O(n!)? [duplicate]

I can't seem to find any examples of algorithms that use O(n!) time complexity, and I can't comprehend how that works. Please help.
A trivial example is the random sort algorithm (bogosort). It randomly shuffles its input until it happens to be sorted.
Of course, it has strictly no use in the real world, but still, it is O(n!).
EDIT: As pointed out in the comments, this is the average-case performance of the algorithm: each shuffle is sorted with probability about 1/n!, so the expected number of shuffles is on the order of n!. The best case is a single O(n) pass (when the input is already sorted), and the worst case is unbounded, since there is no guarantee that the right permutation will ever come up.
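A minimal sketch of that shuffle-until-sorted idea, using Python's standard random module:

```python
import random

def bogosort(a):
    """Shuffle until sorted: roughly n! expected shuffles, each followed by
    an O(n) sortedness check, so O(n * n!) expected time overall."""
    while any(a[i] > a[i + 1] for i in range(len(a) - 1)):
        random.shuffle(a)
    return a

print(bogosort([3, 1, 2]))  # [1, 2, 3]
```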

How is O(n log n) different than O(log n)? [closed]

Researching big O notation, I understand the concept of O(log n) from binary search and O(n log n) from quicksort.
Can anyone put into layman's terms what the main difference in runtime is between these two, and why that is the case? They seem, intuitively, to be similarly related.
Basically: a factor of N.
A binary search only touches a small number of elements. If there are a billion elements, the binary search only touches about 30 of them.
A quicksort touches every single element, a small number of times. If there are a billion elements, the quicksort touches all of them, about 30 times each: about 30 billion touches in total.
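To put numbers on that factor of n, a quick back-of-the-envelope check (Python):

```python
import math

n = 1_000_000_000
print(math.log2(n))      # ~29.9: elements a binary search inspects
print(n * math.log2(n))  # ~3.0e10: total touches for an n log n sort
```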
If you plot the two, log(n) looks essentially flat compared with other growth rates, while n log(n) has already crossed 600 by n = 100. That's how different they are.
In simple terms they behave similarly in sorting algorithms, but quicksort's O(n log n) has a flaw in some situations: in most cases each partitioning level does O(n) work across O(log n) levels, which is where the n in front of the log n comes from, but in special cases it degrades to O(n²). So quicksort is very good for small amounts of sorting, but if you need guaranteed performance on millions or billions of elements, merge sort is the safer choice.

algorithm to find the median value from an array with an odd number of elements [duplicate]

I would like to know whether there exists an algorithm to find the median of an array of odd length. Obviously one could just sort the array and take the middle element, but since we only care about the median, ideally one could make gains in the time complexity of the algorithm.
If no such algorithm exists, any suggestions regarding how to go about developing such an algorithm would be great.
Thanks
This is solved by a selection algorithm, and can be done in O(n) time. Quickselect, or its refinement introselect, are popular methods.
A very brief summary of quickselect: run quicksort, but rather than recursing into both halves at each step, recurse only into the partition that contains the element you're looking for, which you can identify by counting how many elements fall on each side of the pivot.
C++, for example, actually has this as a standard library function: nth_element.
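A minimal quickselect sketch (Python, randomized pivot, so the O(n) bound holds in expectation; this is not the introselect refinement):

```python
import random

def quickselect(a, k):
    """Return the k-th smallest element (0-indexed) of sequence a."""
    pivot = random.choice(a)
    lows   = [x for x in a if x < pivot]
    pivots = [x for x in a if x == pivot]
    highs  = [x for x in a if x > pivot]
    if k < len(lows):                    # answer lies in the low partition
        return quickselect(lows, k)
    if k < len(lows) + len(pivots):      # answer equals the pivot
        return pivot
    return quickselect(highs, k - len(lows) - len(pivots))

def median(a):
    """Median of an odd-length array without fully sorting it."""
    return quickselect(a, len(a) // 2)

print(median([7, 1, 3, 9, 5]))  # 5
```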
You can use a selection algorithm to find the kth smallest element of an array, where k is half the size of the array.
For unstructured data it runs in O(n).
But always keep in mind that theoretical complexity is not everything!
Read also this question.
Yes, such an algorithm exists. The problem you are describing is finding the kth smallest element, where k is half the array length plus one. The median-of-medians algorithm solves it in worst-case O(n) time.
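For completeness, a compact median-of-medians sketch (Python): deterministic worst-case O(n), unlike randomized quickselect, though with worse constant factors in practice:

```python
def select(a, k):
    """Return the k-th smallest element (0-indexed) in worst-case O(n)."""
    if len(a) <= 5:
        return sorted(a)[k]
    # Split into groups of 5, take each group's median, and recursively
    # take the median of those medians as a guaranteed-good pivot.
    groups = [a[i:i + 5] for i in range(0, len(a), 5)]
    medians = [sorted(g)[len(g) // 2] for g in groups]
    pivot = select(medians, len(medians) // 2)
    lows   = [x for x in a if x < pivot]
    pivots = [x for x in a if x == pivot]
    highs  = [x for x in a if x > pivot]
    if k < len(lows):
        return select(lows, k)
    if k < len(lows) + len(pivots):
        return pivot
    return select(highs, k - len(lows) - len(pivots))

print(select([9, 1, 8, 2, 7, 3, 6], 3))  # 6, the median of 7 elements
```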

Calculate res[i+j] = a[i]*b[j] in O(N log N) [duplicate]

I have two arrays a and b, each of length N. I want to calculate the result array as
res[i+j] += a[i]*b[j]
Is it possible to calculate this using the FFT or something similar, in time faster than N²? I saw the question 1D Fast Convolution without FFT already, but I am not sure how to do this using the FFT.
E.g.: A = [1,2,3], B = [2,4,6]
res[3] = A[1]*B[2] + A[2]*B[1]
Thanks in advance.
From what I understand, you want the FFT algorithm: what you are computing is exactly the convolution of a and b, and the convolution theorem lets you do it as transform, pointwise multiply, inverse transform. Here you have an implementation of the algorithm, and also a good explanation of how to implement the FFT.
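A minimal sketch of that route, assuming NumPy is available:

```python
import numpy as np

def convolve_fft(a, b):
    """Compute res[i+j] += a[i]*b[j] in O(N log N) via the FFT:
    transform both inputs, multiply pointwise, transform back."""
    n = len(a) + len(b) - 1          # full length of the result array
    fa = np.fft.rfft(a, n)           # zero-pads the input to length n
    fb = np.fft.rfft(b, n)
    res = np.fft.irfft(fa * fb, n)
    return np.rint(res).astype(int)  # round away FFT noise for integer input

print(convolve_fft([1, 2, 3], [2, 4, 6]))  # [ 2  8 20 24 18]
```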
