For finding the sum of n numbers, which is the better option for lower time complexity?
using a for loop to iterate over the n numbers of an array
using the closed-form formula n*(n+1)/2 (which only holds when the numbers are 1, 2, ..., n)
I think that the second one is correct, but will it also depend on the length of the array? Which one is correct here?
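A minimal sketch of the two options (assuming the array really holds the consecutive integers 1..n; for an arbitrary array only the loop applies):

    def sum_with_loop(arr):
        # O(n): touches every element, works for any array
        total = 0
        for x in arr:
            total += x
        return total

    def sum_with_formula(n):
        # O(1): closed form for 1 + 2 + ... + n only
        return n * (n + 1) // 2

    nums = list(range(1, 11))           # [1, 2, ..., 10]
    print(sum_with_loop(nums))          # 55
    print(sum_with_formula(len(nums)))  # 55, only because nums is 1..n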
I was going over both solutions to Two Sum on LeetCode and I noticed that the n^2 solution is basically to test all combinations of two numbers and see if they sum to the target.
I understand the naive solution iterates over each element of the array (or more precisely n-1 times, because we can't compare the last element to itself) to grab the first addend, and then uses another loop to grab all of the following elements. This second loop needs to iterate n-1-i times, where i is the index of the first addend. I can see that summing n-1-i over all i gives about n^2/2, which is O(n^2).
The problem came when I googled "algorithm for finding combinations" and it led to this thread, where the accepted answer talks about Gray codes, which go way over my head.
Now I'm unsure whether my assumption was correct, whether the naive solution is a version of a Gray code, or something else.
If Two Sum is a combinations problem then its time complexity would be O(n!/((n-k)! k!)), i.e. O(nCk), but I don't see how that reduces to O(n^2).
I read the Two Sum question and it states that:
Given an array of integers nums and an integer target, return indices
of the two numbers such that they add up to target.
It is a combinations problem. However, on closer inspection you will find that here the value of k is fixed.
You need to find two numbers from a list of given numbers that
add up to a particular target.
Any two numbers from n numbers can be selected in nC2 ways.
nC2 = n!/((n-2)! * 2!)
    = n*(n-1)*(n-2)!/((n-2)! * 2)
    = n*(n-1)/2
    = (n^2 - n)/2
Ignoring the lower-order term n and the constant factor 2, since they hardly matter as n tends to infinity, the expression finally results in a complexity of O(n^2).
Hence, a naïve solution of Two Sum has a complexity of O(n^2). Check this article for more information on your question.
https://www.geeksforgeeks.org/given-an-array-a-and-a-number-x-check-for-pair-in-a-with-sum-as-x/
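A minimal sketch of that naive nC2 check (the function name and test values are illustrative, not from the question):

    def two_sum_naive(nums, target):
        # Try every unordered pair (i, j) with i < j: nC2 = n*(n-1)/2 pairs.
        n = len(nums)
        for i in range(n - 1):
            for j in range(i + 1, n):
                if nums[i] + nums[j] == target:
                    return [i, j]
        return None  # no pair sums to target

    print(two_sum_naive([2, 7, 11, 15], 9))  # [0, 1]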
I have a range of numbers from 1-10 and I want to pick out 3 randomly, but never the same one twice. In Lua I used a Fisher-Yates shuffle, which is O(n); I know Python has the built-in random.sample(), also O(n). Can it be done faster with an arbitrary range and number of picks?
It is impossible to do better than O(n) on any sequence of n numbers if you have to read each one, because the n read operations alone already put you at linear complexity.
P.S.: This assumes that n is the number of picks. If instead n is the size of the array and the number of picks m is constant, then you can generate a random index m times and achieve O(1), assuming array indexing takes constant time. I hope that answers your question; please clarify if that didn't solve it.
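A minimal sketch of that constant-m case (the rejection loop with a set, which keeps the picks distinct, is my addition; for m much smaller than the array size the expected number of retries is O(1)):

    import random

    def pick_m_distinct(arr, m):
        # m is a small constant, so this is O(1) overall in expectation.
        chosen = set()
        while len(chosen) < m:
            chosen.add(random.randrange(len(arr)))  # one O(1) random index
        return [arr[i] for i in chosen]

    print(pick_m_distinct(list(range(1, 11)), 3))  # e.g. [2, 7, 9]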
I have a set with n elements. I need to generate all the permutations of those elements.
The time complexity of the algorithm that I'm using is O(n!) and it is recursion based. Naturally, every recursive algorithm can be converted to a non-recursive one using a loop and an explicit stack.
Is it possible to generate all the permutations without using either recursion or the stack + loop equivalence ?
The answer to
Is recursion or stack necessary for factorial time complexity algorithms
in general is no, trivially. Take for example code that simply iterates through all the numbers from 1 to n!:
for i from 1 to factorial(n):
    play("ni.mp3")
As for
Is it possible to generate all the permutations without using either recursion or the stack + loop equivalence ?
the answer is yes, and you can find the answer here. There are several different variations available, depending on the order in which you'd like the permutations to be generated. Here's an example from the first answer:
You start from the rightmost number and move one position to the left until you see a number that is smaller than its right neighbour. Then you swap it with the smallest number to its right that is still larger than it, and order all the numbers after that position increasingly. Do this until there is nothing more to do. Put a little thought into it and you can produce each next permutation in time linear in the number of elements.
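A minimal iterative sketch of that scheme (essentially the classic next-permutation step; the function name is mine):

    def next_permutation(a):
        # Rightmost position whose value is smaller than its right neighbour.
        i = len(a) - 2
        while i >= 0 and a[i] >= a[i + 1]:
            i -= 1
        if i < 0:
            return False  # a was the last permutation (descending order)
        # Swap with the smallest value to its right that is still larger.
        j = len(a) - 1
        while a[j] <= a[i]:
            j -= 1
        a[i], a[j] = a[j], a[i]
        # The suffix is descending; reversing it orders it increasingly.
        a[i + 1:] = reversed(a[i + 1:])
        return True

    a = [1, 2, 3]
    print(a)
    while next_permutation(a):  # no recursion, no explicit stack
        print(a)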
As answered before:
No, recursion and stack are not necessary for factorial time complexity algorithms.
But as I read your post, it looks like you want to ask instead
Is it possible to generate all permutations of n elements faster than O(n!)?
The answer to that is also: no.
Since there are n! different permutations of n elements, every algorithm that saves, displays, or otherwise does anything with all n! permutations requires at least O(n!) time. This includes generating the permutations.
This was inspired by a question at a job interview: how do you efficiently generate N unique random numbers? Their security and distribution/bias don't matter.
I proposed the naive way of calling rand() N times and eliminating dupes by trial and error, thus getting an inefficient and flawed solution. Then I read this SO question; those algorithms are great for getting quality unique numbers, and they are O(N).
But I suspect there are ways to get low-quality unique random numbers for dummy tasks in less than O(N) time. I have some possible ideas:
Store many precomputed lists, each containing N numbers, and retrieve one list at random. Complexity is O(1) for fixed N. Storage space used is O(NR), where R is the number of lists.
Generate N/2 unique random numbers and then derive two unequal numbers from each (floor/ceil for odd numbers, n+1/n-1 for even). I know this is flawed (duplicates can pop up), and O(N/2) is still O(N). This is more food for thought.
Generate one big random number and then squeeze more variants out of it with some fixed manipulations like bitwise operations, factorization, recursion, MapReduce or something else.
Use a quasi-random sequence somehow (not a math guy, just googled this term).
Your ideas?
Presumably this routine has some kind of output (i.e. the results are written to an array of some kind). Populating an array (or some other data-structure) of size N is at least an O(N) operation, so you can't do better than O(N).
You can generate the random numbers consecutively, and whenever the result set already contains the new number, just add to it the maximum of the already generated numbers; the sum is larger than that maximum (assuming positive numbers), so it is unique.
Detecting whether a number has already been generated is O(1) using a hash set. So the whole thing is O(N), with only N random() calls.
Of course, this assumes that we do not overflow the upper limit of the integer type (i.e. no BigInteger needed).
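A minimal sketch of that dedupe-by-offset trick (function and variable names are mine; it assumes positive numbers that don't overflow):

    import random

    def n_unique(n, hi=10**6):
        # N random() calls, O(1) duplicate check per call via the hash set.
        seen = set()
        out = []
        biggest = 0
        for _ in range(n):
            x = random.randint(1, hi)
            if x in seen:
                x += biggest  # larger than the current maximum, hence unique
            seen.add(x)
            out.append(x)
            biggest = max(biggest, x)
        return out

    print(n_unique(10))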
Given an unsorted integer array, and without making any assumptions on
the numbers in the array:
Is it possible to find two numbers whose
difference is minimum in O(n) time?
Edit: The difference between two numbers a and b is defined as abs(a-b).
Find the smallest and largest elements in the list. The signed difference smallest - largest will be the minimum.
If you're looking for the nonnegative difference, then this is of course at least as hard as checking whether the array has two equal elements. That is called the element uniqueness problem and, without any additional assumptions (like limiting the size of the integers, or allowing operations other than comparison), it requires Ω(n log n) time. It is the 1-dimensional case of finding the closest pair of points.
I don't think you can do it in O(n). The best I can come up with off the top of my head is to sort them (which is O(n log n)) and find the minimum difference of adjacent pairs in the sorted list (which adds another O(n)).
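A minimal sketch of that O(n log n) approach (the closest pair must be adjacent after sorting):

    def min_diff_sorted(nums):
        s = sorted(nums)  # O(n log n)
        return min(b - a for a, b in zip(s, s[1:]))  # one O(n) pass

    print(min_diff_sorted([30, 5, 20, 9]))  # 4, from the pair (5, 9)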
I think it is possible. The secret is that you don't actually have to sort the list; you just need to create a tally of which numbers exist. This may count as "making an assumption" from an algorithmic perspective, but not from a practical perspective: we know the ints are bounded by a min and a max.
So, create an array of 2-bit elements, one pair of bits for each int from INT_MIN to INT_MAX inclusive, and set all of them to 00.
Iterate through the entire list of numbers. For each number in the list, if the corresponding 2 bits are 00, set them to 01; if they're 01, set them to 10; otherwise leave them alone. This is obviously O(n).
Next, if any of the 2-bit pairs is set to 10, that is your answer: the minimum distance is 0, because the list contains a repeated number. If not, scan through the tally and find the minimum distance between set bits. Many people have already pointed out there are simple O(n) algorithms for this.
So O(n) + O(n) = O(n).
Edit: responding to comments.
Interesting points. I think you could achieve the same result without making any assumptions by finding the min/max of the list first and using a sparse array ranging from min to max to hold the data. That takes care of the INT_MIN/INT_MAX assumption, the space complexity, and the O(m) time complexity of scanning the array.
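A minimal sketch of the tally idea, using a plain list over [min, max] with counts capped at 2 instead of 2-bit pairs (the scan is O(n + M), where M = max - min):

    def min_diff_tally(nums):
        lo, hi = min(nums), max(nums)
        tally = [0] * (hi - lo + 1)  # one slot per int in [lo, hi]
        for x in nums:
            tally[x - lo] = min(tally[x - lo] + 1, 2)  # cap at 2, like the 2-bit pairs
        if 2 in tally:
            return 0  # a repeated number: minimum distance is 0
        best = prev = None
        for v, c in enumerate(tally):  # O(M) scan over the whole range
            if c:
                if prev is not None and (best is None or v - prev < best):
                    best = v - prev
                prev = v
        return best

    print(min_diff_tally([30, 5, 20, 9]))  # 4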
The best I can think of is to counting sort the array (possibly combining equal values) and then do the sorted comparisons -- bin sort is O(n + M) (M being the number of distinct values). This has a heavy memory requirement, however. Some form of bucket or radix sort would be intermediate in time and more efficient in space.
Sort the list with radixsort (which is O(n) for integers), then iterate and keep track of the smallest distance so far.
(I assume your integer is a fixed-bit type. If they can hold arbitrarily large mathematical integers, radixsort will be O(n log n) as well.)
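A minimal LSD radix sort sketch for fixed-width non-negative ints (one byte per pass; handling negatives is left out for brevity):

    def radix_sort(nums, width=4):
        # O(width * n) = O(n) for a fixed-width integer type.
        for shift in range(0, width * 8, 8):
            buckets = [[] for _ in range(256)]
            for x in nums:
                buckets[(x >> shift) & 0xFF].append(x)  # stable per pass
            nums = [x for b in buckets for x in b]
        return nums

    s = radix_sort([30, 5, 20, 9])
    print(min(b - a for a, b in zip(s, s[1:])))  # 4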
It seems to be possible to sort an unbounded set of integers in O(n*sqrt(log(log(n)))) time. After sorting, it is of course trivial to find the minimal difference in linear time.
But I can't think of any algorithm to make it faster than this.
No, not without making assumptions about the numbers/ordering.
It would be possible given a sorted list though.
I think the answer is no, and the proof is similar to the proof that you cannot sort faster than n lg n: you have to compare all of the elements, i.e. build a comparison tree, which implies an Ω(n lg n) lower bound.
EDIT: OK, if you really want to argue, then the question does not say whether it should run on a Turing machine or not. With quantum computers you can do it in linear time :)