What is the currently recommended method for sorting values in a vector?
A mutable slice of elements with a total ordering (T: Ord) has a sort method.
Because Vec<T> dereferences to a slice (it implements Deref and DerefMut with Target = [T]), you can call this method directly on a vector, so vector.sort() works.
To sort a vector v, in most cases v.sort() will be what you need.
If you want to apply a custom ordering rule, you can do that via v.sort_by(). That includes cases where you want to sort values that:
don't implement Ord (such as f64, most structs, etc.);
do implement Ord, but you want to apply a specific non-standard ordering rule.
Also note that sort() and sort_by() use a stable sorting algorithm (i.e., equal elements are not reordered). If you don't need a stable sort, you can use sort_unstable() / sort_unstable_by(), as those are generally a bit faster and use less memory.
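For example, here is a minimal, self-contained sketch (the values are made up) showing sort() on integers, sort_by() on f64, and an unstable sort with a non-standard order:

fn main() {
    // T: Ord, so plain sort() works.
    let mut ints = vec![3, 1, 2];
    ints.sort();

    // f64 is not Ord (because of NaN), so supply a comparison.
    let mut floats = vec![31.2, 10.0, 100.4, 4.1];
    floats.sort_by(|a, b| a.partial_cmp(b).unwrap()); // panics if a NaN is present

    // Non-standard ordering (descending); stability is irrelevant here.
    let mut desc = vec![3, 1, 2];
    desc.sort_unstable_by(|a, b| b.cmp(a));

    println!("{:?} {:?} {:?}", ints, floats, desc);
}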
While the solutions proposed above can sort vectors of integers, I had problems sorting vectors of floats.
The simplest solution was to use the quickersort crate, which can sort floats as well.
The quickersort crate can also sort vectors of any other type and provides methods that sort with custom comparisons (sort_by).
Following is the Rust code:
extern crate quickersort;

fn main() {
    // Let's create the vector with the values.
    let mut vals = Vec::new();
    vals.push(31.2);
    vals.push(31.2);
    vals.push(10.0);
    vals.push(100.4);
    vals.push(4.1);

    quickersort::sort_floats(&mut vals[..]); // sort the vector in place
    println!("{:?}", vals); // [4.1, 10.0, 31.2, 31.2, 100.4]
}
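As an aside, with the standard library alone the same result can be obtained via a comparison, e.g. vals.sort_by(|a, b| a.total_cmp(b)); (f64::total_cmp is available since Rust 1.62), or the partial_cmp approach shown earlier.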
It might not be evident, but Prolog also offers arrays out of the box. A Prolog compound has a functor and a number of arguments. This means we could represent an array such as:
[[1,2],[3,4]]
by replacing the Prolog lists with the following Prolog compounds:
matrice(vector(1,2), vector(3,4))
The advantage would be faster element access from an integer index. Can this representation be used to realize a matrix multiplication?
There is yet another approach, as implemented in R (the statistical environment). The dimensions of the array and the values are kept separately. So your square matrix could also be represented as:
array(dims(2, 2), v(1,2,3,4))
This approach has some (questionable) benefits and drawbacks. You can start reading here, if you are at all interested: https://stat.ethz.ch/R-manual/R-devel/library/base/html/dim.html
To your question: yes, you can implement matrix multiplication regardless of how you decide to represent the matrix. It would be interesting to see how the two approaches (array of arrays vs. one array and calculating indexes from the dimensions) compare in terms of efficiency.
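To make the "one array plus index calculation from the dimensions" approach concrete, here is a rough sketch (in Rust rather than Prolog, with made-up names): element (i, j) of a row-major R×C matrix lives in the flat array at index i*C + j, and the naive product follows directly.

// Sketch: row-major flat storage, index (i, j) -> i * cols + j.
struct Matrix {
    rows: usize,
    cols: usize,
    data: Vec<f64>,
}

impl Matrix {
    fn get(&self, i: usize, j: usize) -> f64 {
        self.data[i * self.cols + j]
    }

    // Naive product of an n x m matrix by an m x k matrix.
    fn mul(&self, other: &Matrix) -> Matrix {
        assert_eq!(self.cols, other.rows);
        let mut data = vec![0.0; self.rows * other.cols];
        for i in 0..self.rows {
            for j in 0..other.cols {
                let mut s = 0.0;
                for t in 0..self.cols {
                    s += self.get(i, t) * other.get(t, j);
                }
                data[i * other.cols + j] = s;
            }
        }
        Matrix { rows: self.rows, cols: other.cols, data }
    }
}

fn main() {
    // The square from the question: dims(2, 2), v(1, 2, 3, 4).
    let a = Matrix { rows: 2, cols: 2, data: vec![1.0, 2.0, 3.0, 4.0] };
    let b = a.mul(&a);
    println!("{:?}", b.data); // [7.0, 10.0, 15.0, 22.0]
}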
What algorithm do you want to use for the matrix multiplication? Is it any of the ones described here: https://en.wikipedia.org/wiki/Matrix_multiplication_algorithm?
EDIT: do you want to allow the client code to provide the product and sum operations? Do you want to allow specialization of the values? For example, if you want to use matrix multiplication for finding the transitive closure of a graph, you could represent the boolean square matrix as an unbounded integer. This will at least make the matrix itself quite small.
Is there any functionality in Mathematica that lets the user work directly with matrix objects (potentially of unspecified size), e.g. solving a matrix equality for the matrix object without necessarily specifying all elements of the matrix? What I want to do is manipulate matrix equations while treating the matrix objects as elemental, so e.g. solving a matrix equation for a matrix object (and not for all its elements explicitly).
As a simple example of what I mean: say I want to check whether two matrix inequalities are equivalent, and let's say there is a function matrix[A] that declares that A may be of dimension > 2.
matrix[A]
matrix[B]
matrix[C]
In principle that would have to work like an element declaration, for example:
element[A, Reals[dim=n]]
Then there should be a function MatrixSolve[] such that
In: Assuming[A is square and det[A] != 0, MatrixSolve[A*B == C, B]]
Out: B -> A^(-1)*C
Or for example:
Q := A*B (* so Q must also be a matrix *)
In: Assuming[again the necessary stuff, like A square..., A^(-1)*Q === B]
Out: True
I have not found any such functionality in the documentation (or via the search function on SE) and was wondering if there is a way to implement this and, if not, why such features do not exist.
Properties of the list:
must be of size N, where N is the number of integers
no empty cells
numbers may not be perfectly sequential (i.e. {-23,-15,-3,1,2,6,7,8,15,100})
Insertion/Lookup needs to be in constant time.
My first instinct was to use a hash table, but this would create unused cells where numbers are skipped.
Is there any way such a list can be constructed to check in constant time if a number exists in that list?
Following my comments, you can go with a Set. Depending on your exact use case you can check out things like Java's HashSet, or LinkedHashSet if you need to maintain insertion order; according to the doc, the basic operations are constant time:

Like HashSet, it provides constant-time performance for the basic operations (add, contains and remove), assuming the hash function disperses elements properly among the buckets.
If you are looking for solutions on other platforms, maybe there are equivalent implementations, or you can check Java's source code and implement it yourself.
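As a sketch of that idea on another platform (Rust's std::collections::HashSet here, with made-up values), insertion and lookup are expected constant time and no cells are wasted for the skipped numbers:

use std::collections::HashSet;

fn main() {
    // Non-sequential integers, as in the question.
    let values = [-23, -15, -3, 1, 2, 6, 7, 8, 15, 100];

    // Expected O(1) insert and lookup.
    let set: HashSet<i32> = values.iter().copied().collect();

    println!("{}", set.contains(&6));  // true
    println!("{}", set.contains(&42)); // false
}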
When implementing a bloom filter there are a few potential moving parts:
m = size of bit vector
n = items (expected to be) inserted into filter
k = number of hashes to be used
I understand that there are optimal relationships between m/n and k; however, I haven't found a clear explanation of how to map the k hashes onto the bit vector for larger values of m.
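(The standard relationships, for reference: k_opt = (m/n)·ln 2, giving a false-positive rate of roughly (1 − e^(−kn/m))^k.)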
In nearly every example I read, people use values of m that are trivial (under 256) and show the hash functions heavily overlapping. For fewer than 256 bits it's easy to imagine having k 256-bit hash functions and ORing them into the vector.
As m gets larger (to reduce the false positive rate for large values of n), I'm not sure how the hashes should be mapped to the vector. I've seen hints of ideas such as partitioning the vector and applying "independent" hashes (e.g. with different murmur seeds) to each 128-bit section of the vector. However, I haven't seen a concrete example of how to implement Bloom filters for larger n/m values.
When I studied Bloom filters, the page below helped me a lot:
http://matthias.vallentin.net/blog/2011/06/a-garden-variety-of-bloom-filters/
Everything described there is implemented in an open-source library. Looking at its sources was also very helpful.
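To the concrete question of mapping k hashes onto a large bit vector: one widely used technique (not specific to the library above, just a sketch with illustrative names) is double hashing, i.e. deriving the k indices as g_i(x) = h1(x) + i·h2(x) mod m from two base hashes. A minimal Rust sketch:

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Minimal Bloom filter sketch: m bits, with the k indices per item derived
// from two base hashes via double hashing (g_i = h1 + i*h2 mod m).
struct Bloom {
    bits: Vec<u64>, // m bits packed into 64-bit words
    m: u64,
    k: u64,
}

impl Bloom {
    fn new(m: u64, k: u64) -> Self {
        Bloom { bits: vec![0; ((m + 63) / 64) as usize], m, k }
    }

    // Two base hash values; here both come from DefaultHasher, the second
    // one seeded with the first so that the two values differ.
    fn base_hashes<T: Hash>(item: &T) -> (u64, u64) {
        let mut h1 = DefaultHasher::new();
        item.hash(&mut h1);
        let a = h1.finish();

        let mut h2 = DefaultHasher::new();
        a.hash(&mut h2);
        item.hash(&mut h2);
        (a, h2.finish() | 1) // | 1 avoids a zero stride
    }

    fn set_bit(&mut self, idx: u64) {
        self.bits[(idx / 64) as usize] |= 1 << (idx % 64);
    }

    fn get_bit(&self, idx: u64) -> bool {
        self.bits[(idx / 64) as usize] & (1 << (idx % 64)) != 0
    }

    fn insert<T: Hash>(&mut self, item: &T) {
        let (a, b) = Self::base_hashes(item);
        for i in 0..self.k {
            self.set_bit(a.wrapping_add(i.wrapping_mul(b)) % self.m);
        }
    }

    fn contains<T: Hash>(&self, item: &T) -> bool {
        let (a, b) = Self::base_hashes(item);
        (0..self.k).all(|i| self.get_bit(a.wrapping_add(i.wrapping_mul(b)) % self.m))
    }
}

fn main() {
    let mut bf = Bloom::new(1 << 23, 7); // m = 2^23 bits, k = 7
    bf.insert(&"hello");
    println!("{}", bf.contains(&"hello")); // true
    println!("{}", bf.contains(&"world")); // false (with high probability)
}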
With that many elements I would rather rely on simplified cryptographic hash functions :)
I want to do some calculations with matrices of arbitrary size. A simple example: take two matrices, N×M and M×K, with arbitrary elements, and see each element of the product expressed as a sum.
But I can't find a way to do such symbolic calculations without specifying the matrix size as an integer.
matrix() wants an integer, makelist() wants an integer.
Is there a way to do things like this in Maxima? Or in any other CAS?
Unfortunately, Maxima does not know about arbitrary-size matrices, and I don't see an easy way to implement it.
The only way that I see is to define a new kind of expression and provide simplification rules for operations on it. E.g. (and this is just a sketch of a possible solution): use defstruct to define a structure comprising the size and a formula for a typical element, and define a simplification rule for "." (noncommutative multiplication) which creates a new expression whose typical element is a summation.
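For what it's worth, the typical element such a rule would have to produce for the product of an N×M matrix A and an M×K matrix B is just the usual sum, $(A \cdot B)_{ik} = \sum_{j=1}^{M} A_{ij} B_{jk}$, so the stored formula only needs the index variables and the summation bound M.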