What is the ..= (dot dot equals) operator in Rust?

I saw this ..= operator in some Rust code:
for s in 2..=9 {
    // some code here
}
What is it?

This is the inclusive range operator.
The range x..=y contains all values >= x and <= y, i.e. “from x up to and including y”.
This is in contrast to the non-inclusive range operator x..y, which doesn't include y itself.
fn main() {
    println!("{:?}", (10..20) .collect::<Vec<_>>());
    println!("{:?}", (10..=20).collect::<Vec<_>>());
}
// Output:
//
// [10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
// [10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]
Match expressions
You can also use start..=end as a pattern in a match expression to match any value in the (inclusive) range.
match fahrenheit_temperature {
    70..=89 => println!("What lovely weather!"),
    _ => println!("Ugh, I'm staying in."),
}
(Using an exclusive range start..end as a pattern is an experimental feature.)
History
Inclusive ranges used to be an experimental nightly-only feature, and were written with three dots (...) back then.
As of Rust 1.26, it's officially part of the language, and written ..=.
(Before inclusive ranges existed, you actually couldn't create, say, a range of byte values including 255u8. Because that'd be 0..256, and 256 is out of the u8 range! This is issue #23635.)
See also
The Rust 1.26 release blog post's introduction to ..=.
StackOverflow: How do I include the end value in a range?

Restricting variables to a domain in z3

Restricting BitVec's to the values of a list doesn't work as I expected, at least not by using in.
from z3 import *
s = Solver()
lst = [7, 11, 13, 14, 19, 21, 22, 25, 26, 28, 35, 37, 38, 41, 42, 44, 49, 50]
BV = [BitVec(f"bv1{j + 1}", 8) for j in range(11)]
lst_as_domain = [bv in lst for bv in BV]
s.add(lst_as_domain)
print(lst_as_domain) #[False, False, False, False, False, False, False, False, False, False, False]
print(s.check()) #unsat
If I use list comprehension as follows, it works.
from z3 import *
s = Solver()
lst = [7, 11, 13, 14, 19, 21, 22, 25, 26, 28, 35, 37, 38, 41, 42, 44, 49, 50]
BV = [BitVec(f"bv{j + 1}", 8) for j in range(11)]
lst_as_domain = [Or([BV[k] == li for li in lst]) for k in range(11)]
s.add(lst_as_domain)
print(lst_as_domain) #[Or(bv1 == 7, bv1 == 11,... ,bv1 == 50), Or(bv2 == 7,...)..]
print(s.check()) #sat
print(s.model()) #[bv4 = 42, bv7 = 37,..., bv11 = 41]
Why doesn't the first code yield my desired restriction? How can I use in to assert a domain to variables, or is there a short command to achieve this?
Python's built-in in operator does not do what you think it should do on symbolic expressions. This is a consequence of the very loosely-typed nature of the z3 Python bindings: instead of building a symbolic equality, it ends up checking object equality, which always yields False, as you found when you printed lst_as_domain.
The solution is what you already found. Do not use in. For reuse purposes, I'd define a function like:
def member(x, es):
    return Or([x == e for e in es])
And then use it as:
lst_as_domain = [member(bv, lst) for bv in BV]
which will do the right thing and is "close" enough to what you wanted to write in the first place.
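For completeness, here is a minimal end-to-end sketch using member; everything except the helper comes straight from your second snippet, and the model shown in the comments is only illustrative:
from z3 import *

def member(x, es):
    # Build the symbolic disjunction x == es[0] or x == es[1] or ...
    return Or([x == e for e in es])

lst = [7, 11, 13, 14, 19, 21, 22, 25, 26, 28, 35, 37, 38, 41, 42, 44, 49, 50]
BV = [BitVec(f"bv{j + 1}", 8) for j in range(11)]

s = Solver()
s.add([member(bv, lst) for bv in BV])

print(s.check())   # sat
print(s.model())   # e.g. [bv1 = 7, bv2 = 7, ..., bv11 = 7]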
This is a common gotcha with the Python bindings, unfortunately. While z3py tries to make symbolic expressions look and behave like regular Python expressions, that doesn't always work due to limitations in Python and in the z3 Python API itself, which makes it error-prone unless you are careful about which methods are overloaded for symbolic expressions and which are not.
Aside: unfortunately there's no easy way to tell which constructs work on symbolic values out of the box; you have to study how they are implemented internally. Rule of thumb: anything Python doesn't let you overload, you cannot use on symbolic values. But that's not an easy test, admittedly.

Why do certain Time comparisons have counterintuitive results?

In Ruby 2.1.2, I can successfully compare the result of Time.parse and Time.utc for the same time, and it returns the expected true:
Time.parse("2015-02-09T22:38:43Z") == Time.utc(2015, 2, 9, 22, 38, 43)
=> true
However, this same comparison counterintuitively returns false when the second value is not an integer:
Time.parse("2015-02-09T22:38:43.1Z") == Time.utc(2015, 2, 9, 22, 38, 43.1)
=> false
This is despite the fact that the seconds values are still integers and still equal:
Time.parse("2015-02-09T22:38:43.1Z").sec
=> 43
Time.utc(2015, 2, 9, 22, 38, 43.1).sec
=> 43
Time.parse("2015-02-09T22:38:43.1Z").sec == Time.utc(2015, 2, 9, 22, 38, 43.1).sec
=> true
Moreover, the comparison results in true between successive calls of the same methods:
Time.parse("2015-02-09T22:38:43.1Z") == Time.parse("2015-02-09T22:38:43.1Z")
=> true
Time.utc(2015, 2, 9, 22, 38, 43.1) == Time.utc(2015, 2, 9, 22, 38, 43.1)
=> true
Why is this so? Is this a bug, or am I missing something?
Ruby compares fractional seconds as well as seconds when comparing times. For some reason your times receive different fractional seconds:
Time.parse("2015-02-09T22:38:43.1Z").subsec
# => (1/10)
Time.utc(2015, 2, 9, 22, 38, 43.1).subsec
# => (14073748835533/140737488355328)
I believe you are running into a precision issue. Parsing the string ".1" gives exactly the rational 1/10, whereas the float literal 43.1 cannot be represented exactly in binary floating point, so Time.utc ends up with a fractional part very slightly larger than 1/10. The .sec comparison works because .sec truncates to whole seconds, discarding the differing fractional parts.
Underlying the Time class is a very precise count of seconds, and a parsed string and an explicit float argument can be converted just differently enough to cause problems. If sub-second precision is not important to you, it is best to compare whole seconds:
Time.parse("2015-02-09T22:38:43.1Z").to_i == Time.utc(2015, 2, 9, 22, 38, 43.1).to_i
In the above case you are comparing seconds since the Unix epoch (Jan 1, 1970)
Another option would be to create a function to compare two times within a certain precision. Many unit testing frameworks provide that feature. Essentially if T1 == T2 within 0.1 seconds, it's good enough.
The easiest way to do a "within" comparison would be like this:
def within(time1, time2, precision)
  return (time1 - time2).abs < precision
end
NOTE: the above works with Time objects, floats, and rationals.

Take the last element of a lazy enumerator

I have a function like this:
(0..Float::INFINITY).lazy.take_while { |n| (n**2 + 1*n + 41).prime? }.force[-1]
I'm using this as an optimisation exercise. This works fine, but it has a memory order O(n) as it will create the entire array and then take the last element.
I am trying to get this without building the entire list, hence the lazy enumerator. I can't think of anything other than using a while loop.
(0..Float::INFINITY).lazy.take_while { |n| (n**2 + 1*n + 41).prime? }.last.force
Is there a way to do this in space order O(1) rather than O(n) with enumerators?
EDIT: lazy isn't necessary here for the example to work, but I thought it might be more useful to reduce the space complexity of the function?
If you just don't want to save the entire array:
(0..1.0/0).find { |n| !(n**2 + n + 41).prime? } - 1
1.0/0 is the same as Float::INFINITY. I used it in case you hadn't seen it. So far as I know, neither is preferable.
My first thought clearly was overkill:
def do_it
  e = (0..1.0/0).to_enum
  loop do
    n = e.peek
    return n - 1 unless (n**2 + n + 41).prime?
    e.next
  end
end
do_it
Solution
Use inject to hold on to the current value instead of building an array.
(0..Float::INFINITY).lazy.take_while { |n| (n**2 + 1*n + 41).prime? }.inject { |acc, n| n }
Note that you must use lazy; otherwise take_while will build the whole intermediate array before inject ever sees it.
Verifying
To see what happens if you don't use lazy, run the following after restarting ruby & running the non-lazy version. It will return arrays that "look like" the intermediate array.
ObjectSpace.enum_for(:each_object, Array).each_with_object([]) { |e, acc|
  acc << e if e.size == 40 and e.first == 0
}
The non-lazy version will return:
[[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39]]
Re-doing the test with lazy will return an empty array.

Efficient way to figure out which bit is set in a 64 bit quantity

This is effectively log base 2, but I do not have access to this functionality in the environment I'm in. Manually walking through the bits to verify them is unacceptably slow. If it were just 4 bits, I could probably index it and waste some space in an array, but with 64 bits it is not viable.
Any clever constant-time method to find which bit is set? (The quantity is a 64-bit number.)
EDIT: To clarify, there is a single bit set in the number.
I assume you want the position of the most significant set bit. Do a binary search: if the entire value is 0, no bit is set. If the top 32 bits are 0, the bit is in the bottom 32 bits; otherwise it is in the high half. Then apply the same test to the 16-bit halves of the chosen 32 bits, and keep halving until you are down to a 4-bit value and can use your look-up table (or go all the way down to a single bit). You just need to keep track of which half you chose at each step.
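A minimal sketch of that binary search in Python (the function name is mine; it assumes exactly one bit is set and returns its zero-based position):
def bit_position(x):
    # Assumes x is a 64-bit value with exactly one bit set.
    pos = 0
    for shift in (32, 16, 8, 4, 2, 1):
        if x >> shift:        # the set bit lives in the upper half
            x >>= shift
            pos += shift
    return pos

print(bit_position(1))        # 0
print(bit_position(1 << 37))  # 37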
The fastest method I know of uses a De Bruijn sequence.
Find the log base 2 of an N-bit integer in O(lg(N)) operations with multiply and lookup
Note that in lg(N), N is the number of bits, not the number of the highest set bit. So it's constant time for any N-bit number.
If you know that the number is an exact power of 2 (i.e. there is only 1 bit set), there is an even faster method just below that.
That hack is for 32 bits. I seem to recall seeing a 64 bit example somewhere, but can't track it down at the moment. Worst case, you run it twice: once for the high 32 bits and once for the low 32 bits.
If your numbers are powers of 2 and you have a bit count instruction you could do:
bitcount(x-1)
e.g.
x      x-1    bitcount(x-1)
b100   b011   2
b001   b000   0
Note this will not work if the numbers are not powers of 2.
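A quick sanity check of the trick in Python, where counting the 1 bits of x - 1 stands in for a hardware bit count instruction (the function name is mine):
def bit_position_via_popcount(x):
    # Assumes x is an exact power of two; the 1 bits of x - 1 are exactly
    # the bits below the single set bit, so their count is its position.
    return bin(x - 1).count("1")

print(bit_position_via_popcount(0b100))    # 2
print(bit_position_via_popcount(0b001))    # 0
print(bit_position_via_popcount(1 << 63))  # 63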
EDIT
Here is a 64-bit version of the De Bruijn method:
static const int log2_table[64] = {
     0,  1,  2,  7,  3, 13,  8, 19,
     4, 25, 14, 28,  9, 34, 20, 40,
     5, 17, 26, 38, 15, 46, 29, 48,
    10, 31, 35, 54, 21, 50, 41, 57,
    63,  6, 12, 18, 24, 27, 33, 39,
    16, 37, 45, 47, 30, 53, 49, 56,
    62, 11, 23, 32, 36, 44, 52, 55,
    61, 22, 43, 51, 60, 42, 59, 58
};

int fastlog2(unsigned long long x) {
    return log2_table[(x * 0x218a392cd3d5dbfULL) >> 58];
}
Test code:
#include <stdio.h>

int main(int argc, char *argv[])
{
    int i;
    for (i = 0; i < 64; i++) {
        unsigned long long x = 1ULL << i;
        printf("0x%llx -> %d\n", x, fastlog2(x));
    }
    return 0;
}
The magic 64-bit number is an order-6 binary De Bruijn sequence.
Multiplying it by a power of 2 is equivalent to shifting it left by that power's exponent.
This means that the top 6 bits of the multiplication result form a different 6-bit subsequence for each input number. The De Bruijn sequence has the property that every such subsequence is unique, so we can construct a lookup table that maps each subsequence back to the position of the set bit.
If you are using a modern Intel CPU, you can use the hardware-supported POPCNT ("population count") instruction:
http://en.wikipedia.org/wiki/SSE4#POPCNT_and_LZCNT
On Unix/gcc, you can use the intrinsic:
#include <smmintrin.h>   /* for _mm_popcnt_u64 */
#include <stdint.h>

uint64_t x;
int c = _mm_popcnt_u64(x);   /* counts the set bits in x */
/* For a value with a single bit set, applying it to x - 1 (as in the
   bitcount(x - 1) trick above) gives that bit's position. */

Compare all elements inside a 2D array with each other

I have a perfectly square 64x64 2D array of integers that will never have a value greater than 64. I was wondering if there is a really fast way to compare all of the elements with each other and display the ones that are the same, in a unique way.
At the current moment I have this
2D int array named array
loop from i = 0 to 64
    loop from j = 0 to 64
        loop from k = (j+1) to 64
            loop from z = 0 to 64
                if (array[i][j] == array[k][z])
                    print "element [i][j] is same as [k][z]"
As you see having 4 nested loops is quite a stupid thing that I would like not to use. Language does not matter at all whatsoever, I am just simply curious to see what kind of cool solutions it is possible to use. Since value inside any integer will not be greater than 64, I guess you can only use 6 bits and transform array into something fancier. And that therefore would require less memory and would allow for some really fancy bitwise operations. Alas I am not quite knowledgeable enough to think in that format, and therefore would like to see what you guys can come up with.
Thanks to anyone in advance for a really unique solution.
There's no need to sort the array via an O(m log m) algorithm; you can use an O(m) bucket sort. (Letting m = n*n = 64*64).
An easy O(m) method using lists: set up an array H of n+1 integers, initialized to -1, and an array L of m integers to use as list links. For the i'th array element, with value A[i], set k = A[i], then L[i] = H[k] and H[k] = i. When that's done, each H[k] is the head of a list of the indices of entries whose value is k. For 2D arrays, treat array element A[i,j] as A[i + n*(j-1)].
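That H/L bookkeeping looks like this in Python (a sketch with 0-based flat indices; only H, L and the value array A come from the description above, the other names are mine):
import random

n = 7
A = [random.randint(1, n) for _ in range(n * n)]   # flattened n*n array of values 1..n

H = [-1] * (n + 1)    # H[k]: head of the chain of flat indices whose value is k
L = [-1] * (n * n)    # L[i]: next flat index in the chain after i; -1 ends the chain
for i, v in enumerate(A):
    L[i] = H[v]
    H[v] = i

# Walk the chain for one value, e.g. all positions holding a 3:
i = H[3]
while i != -1:
    print("flat index", i, "-> (row, col) =", (i % n, i // n))
    i = L[i]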
Here's a python example using python lists, with n=7 for ease of viewing results:
import random
n = 7
m = n*n
a=[random.randint(1,n) for i in range(m)]
h=[[] for i in range(n+1)]
for i in range(m):
    k = a[i]
    h[k].append(i)
for i in range(1, n+1):
    print('With value %2d: %s' % (i, h[i]))
Its output looks like:
With value 1: [1, 19, 24, 28, 44, 45]
With value 2: [3, 6, 8, 16, 27, 29, 30, 34, 42]
With value 3: [12, 17, 21, 23, 32, 41, 47]
With value 4: [9, 15, 36]
With value 5: [0, 4, 7, 10, 14, 18, 26, 33, 38]
With value 6: [5, 11, 20, 22, 35, 37, 39, 43, 46, 48]
With value 7: [2, 13, 25, 31, 40]
class temp {
    int i, j;
    int value;
}
Then fill a temp array[64][64] with each value and its position, sort it by value (in Java you can do that by implementing the Comparable interface), and the equal elements will end up next to each other, so you can read off their i, j pairs.
This avoids the quadratic brute force: sorting the m = 64*64 values costs O(m log m), and the scan afterwards is linear.
Use quicksort on the array, then iterate through it, keeping the previous value in a temporary "cursor" and checking whether the current value equals it.
array[64][64];
quicksort(array);                        // sort the 64*64 values as one flat sequence
temp = array[0][0];
for x in array[] {
    for y in array[][] {
        if (x == 0 && y == 0) continue;  // temp already holds the first element
        if (temp == array[x][y]) {
            print "duplicate found at x,y";
        }
        temp = array[x][y];
    }
}
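A runnable version of that sort-and-scan idea in Python (a sketch; the names grid and flat are mine, and equal values are reported as consecutive pairs in sorted order):
import random

n = 64
grid = [[random.randint(1, 64) for _ in range(n)] for _ in range(n)]

# Sort (value, row, col) triples; equal values become adjacent after sorting.
flat = sorted((grid[r][c], r, c) for r in range(n) for c in range(n))
for (v1, r1, c1), (v2, r2, c2) in zip(flat, flat[1:]):
    if v1 == v2:
        print("element [%d][%d] is same as [%d][%d]" % (r1, c1, r2, c2))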
