What is complexity measured against? (bits, number of elements, ...)

I've read that the naive approach to testing primality has exponential complexity because you judge the algorithm by the size of its input. Mysteriously, people insist that when discussing primality of an integer, the appropriate measure of the size of the input is the number of bits (not n, the integer itself).
However, when discussing an algorithm like Floyd's, the complexity is often stated in terms of the number of nodes without regard to the number of bits required to store those nodes.
I'm not trying to make an argument here. I honestly don't understand the reasoning. Please explain. Thanks.

Traditionally speaking, the complexity is measured against the size of input.
In the case of numbers, the size of the input is the log of the number (because that is the length of its binary representation); in the case of graphs, all edges and vertices must be represented somehow in the input, so the size of the input is linear in |V| and |E|.
For example, a naive primality test that runs in time linear in the number itself is called pseudo-polynomial. It is polynomial in the value of the number, but it is NOT polynomial in the size of the input, which is log(n); in fact it is exponential in the size of the input.
As a side note, it does not matter whether you measure the size of the input in bits, bytes, or any other unit that differs only by a constant factor, because the constant is discarded anyway when you move to asymptotic notation.
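To make the pseudo-polynomial point concrete, here is a minimal sketch of such a naive test (trial division over every candidate divisor, so it takes O(n) arithmetic operations; the function name is just for illustration):

    def is_prime_naive(n):
        """Trial division over all candidates below n: O(n) arithmetic operations,
        i.e. linear in the VALUE of n, but exponential in the size of the input,
        which is b = n.bit_length() bits (n is roughly 2^b)."""
        if n < 2:
            return False
        for d in range(2, n):
            if n % d == 0:
                return False
        return True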

The main difference is that when discussing algorithms we keep in the back of our mind hardware that can perform operations on the data in O(1) time. When being strict, or when considering data that does not fit into a processor register, taking the number of bits into account becomes important.

Although the size of the input is measured in bits, in many cases we can use a shortcut that lets us divide out a constant number of bits per element. This constant factor is embedded in the representation that we choose for our data structure.
When discussing graph algorithms, we assume that each vertex and each edge has a fixed representation cost in bits, which does not depend on the number of vertices and edges. This assumption requires that the weights associated with vertices and edges have a fixed size in bits (e.g. all machine integers, all floats, etc.).
With this assumption in place, the adjacency list representation has a fixed cost per edge and per vertex, because we need one pointer per edge and one pointer per vertex, in addition to the weights, which we presume to be of constant size as well.
The same goes for the adjacency matrix representation, because we need on the order of W·(V² + V) bits for the matrix, where W is the number of bits required to store a weight.
In the rare situations where the weights themselves depend on the number of vertices or edges, the fixed-weight assumption no longer holds, and we must go back to counting bits.
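As a back-of-the-envelope illustration of this accounting, here is a rough sketch; the 64-bit pointer and weight widths are assumptions chosen only to make the constants visible:

    def adjacency_list_bits(V, E, pointer_bits=64, weight_bits=64):
        """Rough size of an adjacency-list representation under the fixed-size
        assumption above: one list head per vertex plus one (neighbour pointer,
        weight) entry per edge.  The per-vertex and per-edge costs are constants,
        so the total is Theta(|V| + |E|) bits."""
        return V * pointer_bits + E * (pointer_bits + weight_bits)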

Related

Can an integer which must hold the value of n contribute to space complexity?

If an algorithm requires an integer which can contain the number n (e.g. counting the size of an input array), that integer must take on the order of log(n) space (right?).
If this is the only space which scales with n, is the space complexity of the algorithm O(log n)?
Formally, it depends on the model of computation you're using. In the classical random access machine model, with a fixed word size, simply storing the length n of the input indeed requires O(log n) space (and simple arithmetic on such numbers takes O(log n) time). On the other hand, in the transdichotomous model, where the word size is assumed to grow logarithmically with the input size n, it only requires O(1) space (and time). Other models may yield yet other answers.
In practice, most analysis of algorithms assumes that simple arithmetic on moderate-size integers (i.e. proportional to the input length) can be done in constant time and space. A practical reason for this, besides the fact that it simplifies analysis, is that in practice the logarithm of the input length cannot grow very large -- even a computer capable of counting up from zero to, say, 2²⁵⁶, much less of reading that many bits of input, is probably forever beyond the means of humankind to build using known physics. Thus, for any conceivable realistic inputs, you can simply assume that a machine with a 256-bit word size can store the length of the input in a single word (and that machines with a smaller word size still need only a small constant number of words).
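A tiny sketch of that bookkeeping under the fixed-word-size view (the 64-bit word size and the helper name are assumptions for illustration):

    def words_to_store(n, word_bits=64):
        """Number of fixed-width machine words needed just to hold the value n:
        ceil(bit_length(n) / word_bits).  This is Theta(log n) bits overall in the
        classical RAM model, but O(1) words once the word size is assumed to be at
        least the log of the input size, as in the transdichotomous model."""
        return max(1, -(-n.bit_length() // word_bits))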
Here n is bounded: it will be, say, a 32-bit signed integer, since array sizes have an upper limit. So the number of bits needed, about log(2³²) = 32, is bounded by a constant, and the space is O(1).

Hamming numbers for O(N) speed and O(1) memory

Disclaimer: there are many questions about this, but I didn't find any with a constant-memory requirement.
Hamming numbers are numbers of the form 2^i * 3^j * 5^k, where i, j, k are natural numbers.
Is it possible to generate the Nth Hamming number in O(N) time and O(1) (constant) memory? By "generate" I mean exactly a generator, i.e. you can only output the result and not read the previously generated numbers (otherwise the memory would not be constant). You may, however, keep some constant number of them.
The best constant-memory algorithm I can see is no better than O(N log N), for example one based on a priority queue. But is there a mathematical proof that it is impossible to construct an O(N)-time algorithm?
The first thing to consider here is the direct slice enumeration algorithm, which can be seen e.g. in this SO answer: it enumerates the triples (k, j, i) in the vicinity of a given base-2 logarithm value of a sequence member, so that target - delta < k*log2_5 + j*log2_3 + i < target + delta, progressively calculating the cumulative logarithm while picking j and k so that i is known directly.
It is thus an N^(2/3)-time algorithm producing N^(2/3)-wide slices of the sequence at a time (with k*log2_5 + j*log2_3 + i close to the target value, so these triples form the crust of the tetrahedron filled with the Hamming sequence triples [1]), meaning O(1) time per produced number, and thus producing N sequence members in O(N) amortized time and O(N^(2/3)) space. That's no improvement over the baseline Dijkstra's algorithm [2], which has the same complexities, even non-amortized and with better constant factors.
To make it O(1)-space, the crust width needs to be narrowed as we progress along the sequence. But the narrower the crust, the more misses there are when enumerating its triples -- and this is pretty much the proof you asked for. A constant slice size means O(N^(2/3)) work per O(1)-size slice, for an overall O(N^(5/3)) amortized-time, O(1)-space algorithm.
These are the two end points of this spectrum: from N¹ time and N^(2/3) space to N⁰ space and N^(5/3) time, amortized.
[1] Here's the image from Wikipedia, with a logarithmic vertical scale: it is essentially a tetrahedron of Hamming sequence triples (i, j, k) stretched in space as (i*log2, j*log3, k*log5), seen from the side. (The image is a bit askew if it is to be a true 3D picture.)
edit: [2] It seems I forgot that the slices have to be sorted, as they are produced out of order by the j,k enumerations. This changes the best complexity for producing the sequence's N numbers in order via the slice algorithm to O(N^(2/3) log N) time and O(N^(2/3)) space, and makes Dijkstra's algorithm the winner there. It doesn't change the upper bound of O(N^(5/3)) time for the O(1)-size slices, though.
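For illustration, here is a rough sketch of the band ("crust") enumeration described above, with the final sort reflecting the edit in [2]; the function name and the delta parameter are inventions of this sketch, not the referenced answer's code:

    import math

    LOG2_3 = math.log2(3)
    LOG2_5 = math.log2(5)

    def crust_triples(target, delta):
        """Enumerate the triples (i, j, k) with
            target - delta < i + j*log2(3) + k*log2(5) < target + delta,
        i.e. the Hamming numbers 2^i * 3^j * 5^k whose base-2 logarithm falls in a
        band of width 2*delta around `target`.  For each (k, j) the exponent i is
        pinned down directly.  The triples come out unordered in log value, so
        they are sorted at the end (cf. footnote [2])."""
        band = []
        k = 0
        while k * LOG2_5 < target + delta:
            j = 0
            while k * LOG2_5 + j * LOG2_3 < target + delta:
                rest = target - (k * LOG2_5 + j * LOG2_3)
                i_lo = math.floor(rest - delta) + 1   # smallest i above target - delta
                i_hi = math.ceil(rest + delta) - 1    # largest  i below target + delta
                for i in range(max(i_lo, 0), i_hi + 1):
                    band.append((i + j * LOG2_3 + k * LOG2_5, (i, j, k)))
                j += 1
            k += 1
        band.sort()
        return [triple for _, triple in band]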

Hashing of a Bitstring to Sort by Similarities

Problem description:
We have a lot of bitstrings of the same size. The number and the size of the bitstrings are huge. E.g.: 10100101 and 00001111.
Now there is a distance function that simply counts the number of positions where both bitstrings have a bit set. In this example the distance is 2, because the last and the third-to-last bits are set in both bitstrings.
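For concreteness, that distance can be computed with a bitwise AND and a popcount; this tiny sketch just reproduces the example above:

    def similarity(a, b):
        """Count the positions where both bitstrings have a set bit, e.g.
        similarity(0b10100101, 0b00001111) == 2.
        (On Python 3.10+ you can use (a & b).bit_count() instead.)"""
        return bin(a & b).count("1")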
-> Now we can make a tour through the bitstrings with maximal total distance, because every bitstring can be treated as a vertex that is connected to all other bitstrings (with edge weights given by the distance function).
Goal
However, this has a complexity of O(N²). My idea is to use a hash function that preserves the similarities and then do a simple sort on the hash values. This should result in a near-maximal tour. Of course it is not the best result, but it should be a reasonably good one.
Current Problem
My own hash function weights the left bits more heavily than the right bits, so that they have a more significant effect on the sort.
Actual Question
Does such an algorithm exist?
Is it possible to use Locality-Sensitive Hashing (LSH) for this purpose, and if so, can you formulate the respective algorithm? (I don't yet understand the algorithm.)
Thank you, guys!

Fewest subsets with sum less than N

I have a specific sub-problem for which I am having trouble coming up with an optimal solution. This problem is similar to the subset sum group of problems as well as space filling problems, but I have not seen this specific problem posed anywhere. I don't necessarily need the optimal solution (as I am relatively certain it is NP-hard), but an effective and fast approximation would certainly suffice.
Problem: Given a list of positive integers, find the fewest disjoint subsets that together contain the entire list, where each subset sums to less than N. Obviously no integer in the original list can be greater than N.
In my application I have many lists, and I can concatenate them into columns of a matrix as long as they fit in the matrix together. For downstream purposes I would like to have as little "wasted" space as possible in the resulting ragged matrix, hence the similarity to space-filling problems.
Thus far I am employing a greedy-like approach: processing from the largest integers down, I find the largest integer that fits into the current subset under the limit N. Once even the smallest remaining integer no longer fits into the current subset, I proceed to the next subset in the same way, until all numbers are exhausted. This almost certainly does not find the optimal solution, but it was the best I could come up with quickly.
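Here is a minimal sketch of that greedy approach, assuming the strict "sums to less than N" constraint from the problem statement (names are invented here, and it is not claimed to be optimal):

    def greedy_partition(values, N):
        """Repeatedly open a new subset and fill it with the largest remaining
        integers that keep the subset's sum strictly below N."""
        assert all(v < N for v in values), "every value must fit in a subset by itself"
        remaining = sorted(values, reverse=True)   # process from the largest down
        subsets = []
        while remaining:
            current, total, i = [], 0, 0
            while i < len(remaining):
                if total + remaining[i] < N:       # largest remaining value that still fits
                    total += remaining[i]
                    current.append(remaining.pop(i))
                else:
                    i += 1
            subsets.append(current)
        return subsets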
BONUS: My application actually requires batches, where there is a limit on the number of subsets in each batch (M). Thus the larger problem is to find the fewest batches where each batch contains M subsets and each subset sums to less than N.
Straight from Wikipedia (with some amendments in brackets):
In the bin packing problem, objects [integers] of different volumes [values] must be packed into a finite number of bins [sets] or containers, each of volume V [i.e. each subset sums to less than V], in a way that minimizes the number of bins [sets] used. In computational complexity theory, it is a combinatorial NP-hard problem.
https://en.wikipedia.org/wiki/Bin_packing_problem
As far as I can tell, this is exactly what you are looking for.

Algorithm for generating a size k error-correcting code on n bits

I want to generate a code on n bits for k different inputs that I want to classify. The main requirement for this code is the error-correcting criterion: the minimum pairwise distance between any two encodings of different inputs should be maximized. I don't need it to be exact - an approximation will do, and ease of use and speed of implementation are priorities too.
In general, n will be in the hundreds, k in the dozens.
Also, is there a reasonably tight bound on the minimum Hamming distance between k different n-bit binary encodings?
The problem of finding the exact best error-correcting code for given parameters is very hard; even finding approximately best codes is hard. On top of that, some codes don't have any decent decoding algorithms, while for others the decoding problem is quite tricky.
However, you're asking about a particular range of parameters where n ≫ k, where if I understand correctly you want a k-dimensional code of length n (so that k bits are encoded in n bits). In this range, first, a random code is likely to have very good minimum distance. The only problem is that decoding is anywhere from impractical to intractable, and actually calculating the minimum distance is not that easy either.
Second, if you want an explicit code for the case n ≫ k, then you can do reasonably well with a BCH code with q=2. As the Wikipedia page explains, there is a good decoding algorithm for BCH codes.
Concerning upper bounds for the minimum Hamming distance, in the range n ≫ k you should start with the Hamming bound, also known as the volume bound or the sphere packing bound. The idea of the bound is simple and beautiful: If the minimum distance is t, then the code can correct errors up to distance floor((t-1)/2). If you can correct errors out to some radius, it means that the Hamming balls of that radius don't overlap. On the other hand, the total number of possible words is 2ⁿ, so if you divide that by the number of points in one Hamming ball (which in the binary case is a sum of binomial coefficients), you get an upper bound on the number of error-free code words. It is possible to beat this bound, but for large minimum distance it's not easy. In this regime it's a very good bound.
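A small sketch of that calculation in the binary case, using t for the minimum distance as above (the function name is just for illustration):

    from math import comb

    def hamming_bound(n, t):
        """Sphere-packing (Hamming) upper bound on the number of codewords of a
        binary code of length n with minimum distance t: the Hamming balls of
        radius e = floor((t-1)/2) around codewords are disjoint, so
        |C| <= 2^n / sum_{r=0}^{e} C(n, r)."""
        e = (t - 1) // 2
        ball_size = sum(comb(n, r) for r in range(e + 1))
        return (1 << n) // ball_size

For the asker's regime (n in the hundreds, k in the dozens), an upper bound on the achievable minimum distance is the largest t for which hamming_bound(n, t) is still at least 2^k.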

Resources