Speed boost to adjacency matrix - performance

I currently have an algorithm that operates on an adjacency matrix of size n by m. In my algorithm, I need to zero out entire rows or columns at a time. My implementation is currently O(m) or O(n), depending on whether it's a column or a row.
Is there any way to zero out a column or row in O(1) time?

Essentially this depends on the chip architecture you're dealing with. On most CPUs it isn't possible to zero out whole swathes of memory in one go, so each word requires a separate memory operation, no matter what facilities your programming language provides.
Contiguity helps memory-access time tremendously, because memory adjacent to recently accessed memory is cached, so subsequent accesses hit the cache and are fast.
The upshot is that if your matrix is large, it may be faster to zero out rows than columns (or vice versa), depending on whether your data is laid out by row or by column.
EDIT: I have assumed that your matrices aren't sparse, or triangular, or otherwise special, since you talk about "zeroing out a whole row". If you know that your matrix is mostly empty or otherwise fits a special pattern, you could represent it in a different way (not a simple n x m array) and the story would be different. But if you really do have a plain n x m array right now, the above applies.
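For illustration, here is a minimal sketch in Java (the class name is made up), assuming a dense row-major array of arrays: zeroing a row writes one contiguous block, while zeroing a column makes one strided write per row, so on large matrices the row case tends to be the cache-friendly one.

```java
import java.util.Arrays;

public class ZeroDemo {
    // Sketch: dense matrix stored as an array of rows (row-major).
    static void zeroRow(int[][] m, int row) {
        Arrays.fill(m[row], 0);   // one contiguous block: cache-friendly
    }

    static void zeroColumn(int[][] m, int col) {
        for (int[] row : m) {
            row[col] = 0;         // one write per row: strides across memory
        }
    }

    public static void main(String[] args) {
        int[][] m = new int[4][5];
        zeroRow(m, 1);
        zeroColumn(m, 2);
        System.out.println(Arrays.deepToString(m));
    }
}
```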

Is the matrix a distance matrix, and is the graph undirected? (In that case the matrix is symmetric.) If so, you could operate on just the lower or upper triangular part throughout the program. That way you only have to zero out one row (or one column, if you are dealing with the upper triangle), and even then it won't be a whole row, only about half of one on average.

It depends on how your matrices are implemented.
If you have a representation such as an array of arrays, you can point a row at a shared zeroed array, as long as you check that you don't subsequently write to it. That means one of the two (rows or columns, whichever the outer array indexes) can be zeroed in O(1), at a constant extra cost on all other write operations.
You could also have a couple of arrays - one for rows, one for columns - which scale the values in the matrix. Putting a zero in either would be an O(1) operation to mask out a row or column, at the cost of extra processing on every read; but it may be worth it as a way of temporarily removing a node from the graph, if that's a common use case. It also leaves the original version of the matrix untouched, so you could parallelise your algorithm (assuming the only operation it requires is pruning all edges into or out of a node).
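Here is a rough sketch of that scaling idea in Java (the class and method names are made up for illustration):

```java
public class ScaledMatrix {
    // Sketch: every read is multiplied by a per-row and a per-column factor,
    // so setting either factor to 0 masks out an entire row or column in O(1)
    // without touching the underlying data.
    private final int[][] data;
    private final int[] rowScale;
    private final int[] colScale;

    public ScaledMatrix(int rows, int cols) {
        data = new int[rows][cols];
        rowScale = new int[rows];
        colScale = new int[cols];
        java.util.Arrays.fill(rowScale, 1);
        java.util.Arrays.fill(colScale, 1);
    }

    public void set(int r, int c, int value) { data[r][c] = value; }

    public int get(int r, int c) {
        return data[r][c] * rowScale[r] * colScale[c];
    }

    public void zeroRow(int r)       { rowScale[r] = 0; }  // O(1) mask
    public void zeroColumn(int c)    { colScale[c] = 0; }  // O(1) mask
    public void restoreRow(int r)    { rowScale[r] = 1; }  // undo the mask
    public void restoreColumn(int c) { colScale[c] = 1; }
}
```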

Related

Data structure / algorithms for getting best and worst-scoring object from set

My algorithm runs a loop in which a set of objects is maintained. In each iteration, objects are added to and removed from the set. Each object also has some "measures" (integer values, possibly several of them), which can change at any time. From those measures and the current iteration number, a score can be calculated for each object.
Whenever the number of objects passes a certain threshold, I want to identify and remove the lowest-scoring objects until the number of objects is again below that threshold. That is: with n objects and threshold t, if n > t then remove the n - t lowest-scoring objects.
But also, periodically I want to get the highest-scoring objects.
I'm really at a loss as to what data structure I should use here to do this efficiently. A priority queue doesn't really work, as measures change all the time, and anyway the "score" I want to use can be an arbitrarily complex function of those measures and the current iteration number. The obvious approach is probably a hash table storing associations object -> measures, with amortized O(1) add/remove/update operations, but then finding the lowest- or highest-scoring objects would be O(n) in the number of elements. n can easily be in the millions after a short while, so this isn't ideal. Is this the best I can do?
I realise this probably isn't trivial, but I'd like to know if anyone has suggestions as to how this could best be implemented.
PS: The language is OCaml but it really doesn't matter.
For this level of generality, the best approach would be something that gives quick access to the measures (storing them in the object itself, or via a pointer, would be best, but a hash table would also work), plus an additional data structure that keeps an ordered view of your objects.
Every time you update the measures you would refresh the score and update the ordered data structure. Something like a balanced BST (red-black tree, AVL) would work well and would guarantee O(log N) updates.
You could also keep a min-max heap instead of the BST. It has the advantage of using fewer pointers, which should lower the overhead of the solution. Complexity remains O(log N) per update.
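As a rough sketch of that ordered-view idea in Java (the class names are made up, and it assumes the score has already been computed from the measures and the iteration number): a TreeSet keeps entries sorted by score, and a HashMap finds an object's current entry so it can be removed and re-inserted whenever its measures change.

```java
import java.util.Comparator;
import java.util.HashMap;
import java.util.Map;
import java.util.TreeSet;

public class ScoredSet<T> {
    // Sketch: an ordered view of objects by score. Updating an object means
    // removing its old entry, recomputing the score, and re-inserting,
    // which is O(log N) per update.
    private record Entry<U>(double score, long id, U obj) {}

    private final TreeSet<Entry<T>> ordered = new TreeSet<>(
        Comparator.comparingDouble((Entry<T> e) -> e.score())
                  .thenComparingLong(e -> e.id()));   // id breaks score ties
    private final Map<T, Entry<T>> index = new HashMap<>();
    private long nextId = 0;

    public void addOrUpdate(T obj, double score) {
        remove(obj);
        Entry<T> e = new Entry<>(score, nextId++, obj);
        index.put(obj, e);
        ordered.add(e);
    }

    public void remove(T obj) {
        Entry<T> old = index.remove(obj);
        if (old != null) ordered.remove(old);
    }

    public T lowest()  { return ordered.isEmpty() ? null : ordered.first().obj(); }
    public T highest() { return ordered.isEmpty() ? null : ordered.last().obj(); }
    public int size()  { return index.size(); }
}
```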
You've mentioned that the score depends on the iteration number. This is bad for performance, because it would require every entry to be updated every iteration. However, if you can isolate the impact (say the score is g(all_metrics) - f(iteration_number)) so that all elements are affected equally, then the relative order stays the same and you can skip updating the scores every iteration.
If it's not constant but still isolated (something like f(iteration_number, important_time)), you can use the balanced BST and calculate at which iteration each element will swap places with one of its neighbours, keep those swap times in a heap, and only update the elements that are about to swap.
If it's not isolated at all, then at each iteration you need to update all the elements, so you might as well keep track of the highest and the lowest values as you go through them to recompute the scores. That pass is O(N log K), where K is the number of lowest-scoring elements you want to remove (hopefully K is small, so it behaves almost like O(N)).
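A sketch of that last approach in Java (names are made up): while recomputing every score, keep a bounded max-heap of the K lowest-scoring items seen so far, which keeps the whole pass at O(N log K).

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;
import java.util.function.ToDoubleFunction;

public class LowestK {
    // Sketch: during a full rescoring pass, keep only the k lowest-scoring
    // items in a bounded heap whose top is the largest of the kept items.
    public static <T> List<T> lowestK(Iterable<T> items, ToDoubleFunction<T> score, int k) {
        PriorityQueue<T> kept =
            new PriorityQueue<>(Comparator.<T>comparingDouble(score).reversed());
        for (T item : items) {
            kept.add(item);
            if (kept.size() > k) {
                kept.poll();   // drop the highest-scoring of the kept items
            }
        }
        return new ArrayList<>(kept);  // the (up to) k lowest-scoring items
    }
}
```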

Gomoku Board representation

I'm working on a Gomoku game and I need an efficient data structure to store the board's state.
I've thought about storing it in a 2D array, but I'm sure that there is a more efficient way.
Thanks
In terms of time efficiency, since I believe you'll mainly be doing index lookups, an array would be pretty much the best choice - it supports this lookup in constant time, with a low constant factor.
In terms of space efficiency:
Each square can be either empty or occupied by one of the two players, so there are at most 3 possibilities per square. For maximum space efficiency we could store the entire board in a base-3 representation, but since a computer works in binary we'd need to process the entire board to determine the value of a given square (so a simple index lookup would take time proportional to the size of the board - if time really isn't a problem, you could consider this). Instead, I'd recommend using 2 bits per square, which lets us indicate one of 4 possibilities (the 4th being unused).
Many languages have some sort of bitset implementation, allowing you to work with an array of bits, which would be perfect for the above.
You'd also want just a single bitset (not a 2D structure), as there's usually a bit of memory overhead involved in working with 2D structures. The conversion from 2D to 1D is simple: convert the 2D index with either x*height + y or y*width + x.
Although I'd recommend first making sure that you actually need this optimization - I believe Gomoku boards are typically small, so even a bulky representation would work perfectly well (although some AI techniques make many copies of the board, so if you're doing that, a minimal representation makes sense).
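A minimal sketch of that packing in Java (the class name and the value encoding are my own choices for illustration):

```java
public class GomokuBoard {
    // Sketch: 2 bits per square packed into longs (32 squares per long).
    // 0 = empty, 1 = player one, 2 = player two (3 is unused).
    public static final int EMPTY = 0, PLAYER_ONE = 1, PLAYER_TWO = 2;

    private final int width, height;
    private final long[] bits;

    public GomokuBoard(int width, int height) {
        this.width = width;
        this.height = height;
        this.bits = new long[(width * height + 31) / 32];
    }

    private int index(int x, int y) { return x * height + y; }  // 2D -> 1D

    public int get(int x, int y) {
        int i = index(x, y);
        return (int) ((bits[i >> 5] >>> ((i & 31) * 2)) & 3L);
    }

    public void set(int x, int y, int value) {
        int i = index(x, y);
        int word = i >> 5, shift = (i & 31) * 2;
        bits[word] = (bits[word] & ~(3L << shift)) | ((long) value << shift);
    }
}
```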

Why are permutation matrices used to swap rows of an array?

What are the advantages of using a permutation matrix to swap rows? Why would one create a permutation matrix and then apply a matrix multiplication? Is that easier and more efficient than just swapping rows with a for loop?
Permutation matrices are a useful mathematical abstraction, because they allow analysis using the normal rules of matrix algebra, without having to introduce another type of operation.
In software, good implementations do not store a permutation matrix as a full matrix; they store a permutation array and apply it directly (without a full matrix multiplication).
Depending on the sizes of the matrices and the operations and access patterns involved, it may be cheaper not to apply the permutation to the data in memory at all, but just to use it as an extra indirection. So, when you request (P * M)(i,j), where P is a permutation matrix and M is some other matrix that you are permuting, the data need not be re-arranged at all, but rather the element access operation will look up the permuted row when you access the element.
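For illustration, a small Java sketch of that indirection (the names are made up): the permutation is stored as an int array, element reads go through it, and swapping two rows is just swapping two indices.

```java
public class PermutedMatrix {
    // Sketch: P * M is represented by leaving M unchanged and keeping a
    // permutation array, where perm[i] is the row of M that appears as
    // row i of the permuted matrix.
    private final double[][] m;
    private final int[] perm;

    public PermutedMatrix(double[][] m, int[] perm) {
        this.m = m;
        this.perm = perm;
    }

    // (P * M)(i, j) without rearranging any data: one extra index lookup.
    public double get(int i, int j) {
        return m[perm[i]][j];
    }

    // Swapping two rows of the permuted matrix is O(1): swap the indices.
    public void swapRows(int i, int k) {
        int tmp = perm[i];
        perm[i] = perm[k];
        perm[k] = tmp;
    }
}
```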
The first thing that comes to my mind is the issue called "spatial locality". Caching hardware assumes that if a memory location is accessed, nearby locations are likely to be accessed soon. In some programming languages the elements of a row are neighbours in memory, whereas in others the elements of a column are; it depends on the implementation. I suspect this is part of why permutation matrices are used, since optimizing matrix multiplication is one of the problems the algorithms community works on constantly, and a simple swap loop will not necessarily make good use of the cache.

data structure for NFA representation

In my lexical analyzer generator I use the McNaughton and Yamada algorithm for NFA construction, and one of its properties is that a transition from state I to state J is labelled with the character at position J.
So each node of the NFA can be represented simply as a list of possible next states.
Which data structure is best suited to storing this kind of data? It must provide fast lookup of all possible next states and use little space, but insertion time is not so important.
My understanding is that you want to encode a graph, where the nodes are states and the edges are transitions, and where every edge is labelled with a character. Is that correct?
The dull but practical answer is to have an object for each state, and to encode the transitions in some little structure in that object.
The simplest one would be an array indexed by character code: that's as fast as it gets, but not naturally space-efficient. You can make it more space-efficient by using a sort of offset, truncated array: store only the part of the array that contains transitions, along with the start and end indices of that part. When looking up a character, check that its code is within the bounds; if it isn't, treat it as a null edge (or an edge back to the start state, or whatever), and if it is, fetch the element at index (character code - start). Does that make sense?
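A rough Java sketch of that truncated array for a single state (the names are mine; each slot holds a list of possible next states, since an NFA may have several):

```java
public class StateTransitions {
    // Sketch: outgoing transitions for one state, stored as a truncated array
    // covering only the character range [firstChar, firstChar + targets.length).
    private static final int[] NONE = new int[0];

    private final char firstChar;    // lowest character code with an outgoing edge
    private final int[][] targets;   // targets[c - firstChar] = next-state ids for c

    public StateTransitions(char firstChar, int[][] targets) {
        this.firstChar = firstChar;
        this.targets = targets;
    }

    public int[] nextStates(char c) {
        int offset = c - firstChar;
        if (offset < 0 || offset >= targets.length) {
            return NONE;             // outside the stored window: no edge for c
        }
        return targets[offset];
    }
}
```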
A more complex option would be a little hashtable, which would be more compact but slightly slower. I would suggest closed hashing, because collision lists will use too much memory; linear probing should be enough. You could look into using perfect hashing (look it up), which takes a lot of time to generate the table but then gives collision-free lookup. The generation process is quite complex, though.
A clever approach is to use both arrays and hashtables, and to pick one or the other based on the number of edges: if the compacted array would be more than, say, a third full, use it, but if not, use a hashtable.
Now, something a bit more radical you could do would be to use arrays, but to overlap them - if they're sparse, they'll have lots of holes in them, and if you're clever, you can arrange them so that the entries in each array line up with holes in the others. That gives you fast lookups and excellent memory efficiency. You will need some scheme for distinguishing a successful lookup from an empty slot holding some other state's transition, but I'm sure you can think of something.
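A minimal Java sketch of the lookup side of that overlapping scheme, shown with a single target per slot for simplicity (the names are mine; choosing the offsets, e.g. by greedy first-fit during construction, is left out):

```java
public class OverlappedTransitions {
    // Sketch: all states share one big pair of arrays. Each state gets a base
    // offset chosen so that its entries fall into holes left by the others.
    // owner[] records which state a slot really belongs to, so hitting another
    // state's entry is detected and treated as "no edge".
    private final int[] base;     // base[state] = offset of that state's row
    private final int[] owner;    // owner[slot] = state the entry belongs to, or -1
    private final int[] target;   // target[slot] = next-state id

    public OverlappedTransitions(int[] base, int[] owner, int[] target) {
        this.base = base;
        this.owner = owner;
        this.target = target;
    }

    public int next(int state, char c) {
        int slot = base[state] + c;
        if (slot < 0 || slot >= owner.length || owner[slot] != state) {
            return -1;            // empty slot or another state's entry: no edge
        }
        return target[slot];
    }
}
```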

Efficiently estimating the number of unique elements in a large list

This problem is a little similar to the one solved by reservoir sampling, but it's not the same. I think it's also a rather interesting problem.
I have a large dataset (typically hundreds of millions of elements), and I want to estimate the number of unique elements in it. There may be anywhere from a few to millions of unique elements in a typical dataset.
Of course the obvious solution is to maintain a running hashset of the elements you encounter and count them at the end. This would yield an exact result, but would require me to carry a potentially large amount of state as I scan through the dataset (i.e. all unique elements encountered so far).
Unfortunately, in my situation this would require more RAM than is available to me (note that the dataset may be far larger than available RAM).
I'm wondering if there is a statistical approach that would allow me to do a single pass through the dataset and come up with an estimated unique-element count at the end, while maintaining a relatively small amount of state as I scan.
The input to the algorithm would be the dataset (an Iterator in Java parlance), and it would return an estimated unique object count (probably a floating point number). It is assumed that these objects can be hashed (ie. you can put them in a HashSet if you want to). Typically they will be strings, or numbers.
You could use a Bloom Filter for a reasonable lower bound. You just do a pass over the data, counting and inserting items which were definitely not already in the set.
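A small Java sketch of that idea (the second hash is an arbitrary mix of my own choosing; any reasonably independent hash would do):

```java
import java.util.BitSet;

public class BloomDistinctCounter {
    // Sketch: a Bloom filter gives a lower bound on the number of distinct
    // elements. An element is counted only when the filter says it was
    // definitely not seen before; genuinely new elements that collide with
    // earlier ones are missed, hence "lower bound".
    private final BitSet bits;
    private final int numBits;
    private final int numHashes;
    private long count = 0;

    public BloomDistinctCounter(int numBits, int numHashes) {
        this.bits = new BitSet(numBits);
        this.numBits = numBits;
        this.numHashes = numHashes;
    }

    public void add(Object item) {
        int h1 = item.hashCode();
        int h2 = Integer.reverse(h1) * 0x9E3779B9;   // crude second hash for double hashing
        boolean definitelyNew = false;
        for (int i = 0; i < numHashes; i++) {
            int pos = Math.floorMod(h1 + i * h2, numBits);
            if (!bits.get(pos)) {
                definitelyNew = true;
                bits.set(pos);
            }
        }
        if (definitelyNew) count++;
    }

    public long lowerBound() { return count; }
}
```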
This problem is well-addressed in the literature; a good review of various approaches is http://www.edbt.org/Proceedings/2008-Nantes/papers/p618-Metwally.pdf. The simplest approach (and most compact for very high accuracy requirements) is called Linear Counting. You hash elements to positions in a bitvector just like you would a Bloom filter (except only one hash function is required), but at the end you estimate the number of distinct elements by the formula D = -total_bits * ln(unset_bits/total_bits). Details are in the paper.
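A minimal Java sketch of Linear Counting as described above (in practice you'd want a better hash than hashCode()):

```java
import java.util.BitSet;

public class LinearCounter {
    // Sketch: hash each element to one bit of a bitvector of size m, then
    // estimate the distinct count as D = -m * ln(unsetBits / m).
    private final BitSet bits;
    private final int m;

    public LinearCounter(int m) {
        this.bits = new BitSet(m);
        this.m = m;
    }

    public void add(Object item) {
        bits.set(Math.floorMod(item.hashCode(), m));
    }

    public double estimate() {
        int unset = m - bits.cardinality();
        if (unset == 0) return Double.POSITIVE_INFINITY;  // table saturated: m was too small
        return -m * Math.log((double) unset / m);
    }
}
```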
If you have a hash function that you trust, then you could maintain a hashset just like you would for the exact solution, but throw out any item whose hash value is outside of some small range. E.g., use a 32-bit hash, but only keep items where the first two bits of the hash are 0. Then multiply by the appropriate factor at the end to approximate the total number of unique elements.
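A rough Java sketch of that sampling idea (the names are mine, hashCode() stands in for the trusted hash function, and it uses the low k bits rather than the first two, but the idea is the same):

```java
import java.util.HashSet;
import java.util.Iterator;
import java.util.Set;

public class SampledDistinctCounter {
    // Sketch: keep only elements whose hash falls in a 1/2^k slice of the
    // hash space, count them exactly, then scale back up by 2^k. Memory
    // use drops by roughly the same factor.
    public static double estimateDistinct(Iterator<?> data, int k) {
        Set<Object> kept = new HashSet<>();
        int mask = (1 << k) - 1;                 // look at the low k bits of the hash
        while (data.hasNext()) {
            Object item = data.next();
            if ((item.hashCode() & mask) == 0) { // keeps ~1 in 2^k of the distinct values
                kept.add(item);
            }
        }
        return kept.size() * Math.pow(2, k);
    }
}
```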
Nobody has mentioned the approximate algorithm designed specifically for this problem: HyperLogLog.
