Restricting BitVecs to the values of a list doesn't work as I expected, at least not by using in.
from z3 import *
s = Solver()
lst = [7, 11, 13, 14, 19, 21, 22, 25, 26, 28, 35, 37, 38, 41, 42, 44, 49, 50]
BV = [BitVec(f"bv1{j + 1}", 8) for j in range(11)]
lst_as_domain = [bv in lst for bv in BV]
s.add(lst_as_domain)
print(lst_as_domain) #[False, False, False, False, False, False, False, False, False, False, False]
print(s.check()) #unsat
If I use list comprehension as follows, it works.
from z3 import *
s = Solver()
lst = [7, 11, 13, 14, 19, 21, 22, 25, 26, 28, 35, 37, 38, 41, 42, 44, 49, 50]
BV = [BitVec(f"bv{j + 1}", 8) for j in range(11)]
lst_as_domain = [Or([BV[k] == li for li in lst]) for k in range(11)]
s.add(lst_as_domain)
print(lst_as_domain) #[Or(bv1 == 7, bv1 == 11,... ,bv1 == 50), Or(bv2 == 7,...)..]
print(s.check()) #sat
print(s.model()) #[bv4 = 42, bv7 = 37,..., bv11 = 41]
Why doesn't the first code yield my desired restriction? How can I use in to assert a domain to variables, or is there a short command to achieve this?
Python's built-in in operator does not do what you think it should do on symbolic expressions. This is a consequence of the very loosely-typed nature of the z3 Python bindings: instead of building a symbolic equality, it ends up checking for object equality, and always gets False as the answer, which is what you found when you printed lst_as_domain.
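A quick way to see what is going on (my own illustration; the exact bool() fallback is an internal detail of the z3 Python bindings):
from z3 import *
x = BitVec("x", 8)
c = (x == 7)         # a symbolic BoolRef, not a Python bool
print(c)             # x == 7
print(bool(c))       # False: falls back to structural equality of the two ASTs
print(x in [7, 11])  # False: in calls bool() on each == comparison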
The solution is what you already found. Do not use in. For reuse purposes, I'd define a function like:
def member(x, es):
    return Or([x == e for e in es])
And then use it as:
lst_as_domain = [member(bv, lst) for bv in BV]
which will do the right thing and is "close" enough to what you wanted to write in the first place.
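Putting it together, a minimal end-to-end sketch (same list and variables as in your working version):
from z3 import *

def member(x, es):
    return Or([x == e for e in es])

lst = [7, 11, 13, 14, 19, 21, 22, 25, 26, 28, 35, 37, 38, 41, 42, 44, 49, 50]
BV = [BitVec(f"bv{j + 1}", 8) for j in range(11)]

s = Solver()
s.add([member(bv, lst) for bv in BV])
print(s.check())  # sat
print(s.model())  # some assignment drawing every bv from lst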
This is a common gotcha for the Python bindings, unfortunately. While it tries to make symbolic z3 expressions look and behave like Python expressions themselves, it doesn't always work due to limitations in Python and the z3-Python API itself, which makes it error-prone to use unless you're very careful about which methods are overloaded to work on symbolic expressions and which are not.
Aside: Unfortunately there's no easy way to tell which constructs will work on symbolic values out-of-the-box. You have to study how they're implemented internally. Rule-of-thumb: Anything that Python doesn't allow you to overload, you cannot use on symbolic values. But that's not an easy test, admittedly.
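For instance (my own illustration), Python's and/or cannot be overloaded, so they silently misbehave on symbolic values; z3's And/Or functions are the safe spelling:
from z3 import *
x = BitVec("x", 8)
# Python's or calls bool() on its left operand, which comes out False here, so
# the whole expression quietly collapses to just the right operand:
print(x == 7 or x == 11)    # x == 11
print(Or(x == 7, x == 11))  # Or(x == 7, x == 11) -- what you actually want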
What I have is a matrix, and I need to orthogonalize its eigenvectors.
That is basically all I need, but in exact form.
So here is my Wolfram input
(orthogonolize(eigenvectors({{146, 112, 78, 17, 122}, {112, 86, 60, 13, 94}, {78, 60, 42 , 9, 66}, {17, 13, 9, 2, 14}, {122, 94, 66, 14, 104}})))
That gives me float numbers, while I need the exact forms.
Any ways to fix this?
Wolfram Mathematica (not WolframAlpha, which is a completely different product with different rules and gives different results), given this
FullSimplify[Orthogonalize[Eigenvectors[{
{146, 112, 78, 17, 122}, {112, 86, 60, 13, 94}, {78, 60, 42 , 9, 66},
{17, 13, 9, 2, 14}, {122, 94, 66, 14, 104}}]]]
returns this exact form
{{Sqrt[121/342 + 52/(9*Sqrt[35587])], Sqrt[5/38 + 18/Sqrt[35587]],
Sqrt[25/342 + 64/(9*Sqrt[35587])], Sqrt[7/38 - 26/Sqrt[35587]]/3,
2*Sqrt[2/19 - 7/Sqrt[35587]]},
{-1/3*Sqrt[121/38 - 52/Sqrt[35587]], -Sqrt[5/38 - 18/Sqrt[35587]],
Sqrt[25/38 - 64/Sqrt[35587]]/3, -1/3*Sqrt[7/38 + 26/Sqrt[35587]],
Sqrt[8/19 + 28/Sqrt[35587]]},
{3/Sqrt[35], -Sqrt[5/7], 0, 0, 1/Sqrt[35]},
{-11/Sqrt[5110], -Sqrt[5/1022], 0, Sqrt[70/73], 4*Sqrt[2/2555]},
{-17/(3*Sqrt[2774]), -7/Sqrt[2774], Sqrt[146/19]/3, Sqrt[2/1387]/3, -9*Sqrt[2/1387]}}
Think of at least two different ways you can check that for correctness before you depend on that.
The last three of those can be simplified somewhat
1/Sqrt[35]*{3,-5,0,0,1},
1/Sqrt[5110]*{-11,-5,0,70,8},
1/(3*Sqrt[2774])*{-17,-21,146,2,-54}
but I cannot yet see a way to simplify the first two to a third of their current size. Can anyone else see a way to do that? Please check these results very carefully.
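One way to check the result from outside Mathematica (my own aside, and only a numerical sanity check, not an exact one): evaluate the exact rows with N[...] and test them for orthonormality and the eigenvector property, e.g. with numpy:
import numpy as np

M = np.array([[146, 112,  78, 17, 122],
              [112,  86,  60, 13,  94],
              [ 78,  60,  42,  9,  66],
              [ 17,  13,   9,  2,  14],
              [122,  94,  66, 14, 104]], dtype=float)

def check(rows, tol=1e-9):
    # rows: candidate orthonormal eigenvectors of M, one per row
    V = np.asarray(rows, dtype=float)
    ortho = np.allclose(V @ V.T, np.eye(len(V)), atol=tol)                   # pairwise orthonormal
    eigen = all(np.allclose(M @ v, (v @ M @ v) * v, atol=tol) for v in V)    # M v = lambda v
    return ortho and eigen

# numpy's own eigendecomposition of the symmetric matrix passes, as expected;
# pasting in N[...] of the exact rows above should pass the same test
w, V = np.linalg.eigh(M)
print(check(V.T))  # True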
I have a function like this:
require 'prime'
(0..Float::INFINITY).lazy.take_while { |n| (n**2 + 1*n + 41).prime? }.force[-1]
I'm using this as an optimisation exercise. This works fine, but it uses O(n) memory, as it builds the entire array and then takes the last element.
I am trying to get this without building the entire list, hence the lazy enumerator. I can't think of anything other than using a while loop.
(0..Float::INFINITY).lazy.take_while {|n|(n**2+ 1*n+41).prime?}.last.force
Is there a way to do this in space order O(1) rather than O(n) with enumerators?
EDIT: lazy isn't necessary here for the example to work, but I thought it might help reduce the space complexity of the function.
If you just don't want to save the entire array:
(0..1.0/0).find {|n| !(n**2+n+41).prime?} - 1
1.0/0 is the same as Float::INFINITY. I used it in case you hadn't seen it. So far as I know, neither is preferable.
My first thought clearly was overkill:
def do_it
  e = (0..1.0/0).to_enum
  loop do
    n = e.peek
    return n - 1 unless (n**2 + n + 41).prime?
    e.next
  end
end
do_it
Solution
Use inject to hold on to the current value instead of building an array.
(0..Float::INFINITY).lazy.take_while {|n|(n**2+ 1*n+41).prime?}.inject{|acc,n| n }
Note that you must keep lazy; without it, take_while builds the intermediate array before inject ever runs.
Verifying
To see what happens if you don't use lazy, restart Ruby, run the non-lazy version, and then run the following. It returns any arrays that "look like" the intermediate array.
ObjectSpace.enum_for(:each_object, Array).each_with_object([]) { |e, acc|
  acc << e if e.size == 40 and e.first == 0
}
The non-lazy version will return:
[[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39]]
Re-doing the test with lazy will return an empty array.
I was under the impression that set() would order a collection much like .sort()
However, it seems that it doesn't; what was peculiar to me was why it reorders the collection.
>>> h = '321'
>>> set(h)
set(['1', '3', '2'])
>>> h
'321'
>>> h = '22311'
>>> set(h)
set(['1', '3', '2'])
Why doesn't it return set(['1', '2', '3'])? It also seems that no matter how many instances of each number I use, or in what order I use them, it always returns set(['1', '3', '2']). Why?
Edit:
So I have read your answers and my counter to that is this.
>>> l = [1,2,3,3]
>>> set(l)
set([1, 2, 3])
>>> l = [3,3,2,3,1,1,3,2,3]
>>> set(l)
set([1, 2, 3])
Why does it order numbers and not strings?
Also
import random
l = []
for itr in xrange(101):
    l.append(random.randint(1,101))
print set(l)
Outputs
>>>
set([1, 2, 4, 5, 6, 8, 10, 11, 12, 14, 15, 16, 18, 19, 23, 24, 25, 26, 29, 30, 31, 32, 34, 40, 43, 45, 46, 47, 48, 49, 50, 51, 53, 54, 55, 57, 58, 59, 60, 61, 62, 63, 64, 66, 67, 69, 70, 74, 75, 77, 79, 80, 83, 84, 85, 87, 88, 89, 90, 93, 94, 96, 97, 99, 101])
A Python set is unordered, hence there is no guarantee that the elements will be ordered in the same way you specified them.
If you want a sorted output, then call sorted:
sorted(set(h))
Responding to your edit: it comes down to the implementation of set. In CPython, it boils down to two things:
1) the set will be ordered by hash (the __hash__ function) modulo a limit
2) the limit is generally the next largest power of 2
So let's look at the int case:
x=1
type(x) # int
x.__hash__() # 1
For ints, the hash equals the original value:
[x==x.__hash__() for x in xrange(1000)].count(False) # = 0
Hence, when all the values are ints, it will use the integer hash value and everything works smoothly.
For the string representations, the hashes don't work the same way:
x='1'
type(x)
# str
x.__hash__()
# 6272018864
To understand why the sort breaks for ['1','2','3'], look at those hash values:
[str(x).__hash__() for x in xrange(1,4)]
# [6272018864, 6400019251, 6528019634]
In our example, the mod value is 4 (3 elts, 2^1 = 2, 2^2 = 4) so
[str(x).__hash__()%4 for x in xrange(1,4)]
# [0, 3, 2]
[(str(x).__hash__()%4,str(x)) for x in xrange(1,4)]
# [(0, '1'), (3, '2'), (2, '3')]
Now if you sort this beast, you get the ordering that you see in set:
[y[1] for y in sorted([(str(x).__hash__()%4,str(x)) for x in xrange(1,4)])]
# ['1', '3', '2']
From the Python documentation of the set type:
A set object is an unordered collection of distinct hashable objects.
This means that the set doesn't have a concept of the order of the elements in it. You should not be surprised when the elements are printed on your screen in an unusual order.
A set in Python tries to be a "set" in the mathematical sense of the term. No duplicates, and order shouldn't matter.
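To make that concrete (a small illustration of my own; the exact iteration order you see can differ between Python builds and versions):
h = '22311'
l = [3, 3, 2, 3, 1, 1, 3, 2, 3]

print(set(h))          # duplicates removed; iteration order is an implementation detail
print(set(l))          # small ints happen to land in hash order, which looks sorted
print(sorted(set(h)))  # ['1', '2', '3'] -- ask for an order explicitly when you need one
print(sorted(set(l)))  # [1, 2, 3]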
I am new to Mathematica and am trying to understand patterns and rules. So I tried the following:
A = {1, 2, 3, 4}
A //. {x_?EvenQ -> x/2, x_?OddQ -> 3 x + 1}
This is based on: http://en.wikipedia.org/wiki/Collatz_conjecture
This is supposed to converge, but what I got is:
ReplaceRepeated::rrlim: Exiting after {1,2,3,4} scanned 65536 times. >>
Please help me understand my error in the pattern/rule.
Regards
The way you wrote this, it does not terminate; e.g. it ends up cycling through 1, 4, 2 over and over (all recursive descriptions must eventually bottom out somewhere, and yours does not include a case to do that at n = 1).
This works:
ClearAll[collatz];
collatz[1] = 1;
collatz[n_ /; EvenQ[n]] := collatz[n/2]
collatz[n_ /; OddQ[n]] := collatz[3 n + 1]
although it does not give a list of the intermediate results. A convenient way to get them is
ClearAll[collatz];
collatz[1] = 1;
collatz[n_ /; EvenQ[n]] := (Sow[n]; collatz[n/2])
collatz[n_ /; OddQ[n]] := (Sow[n]; collatz[3 n + 1])
runcoll[n_] := Last@Last@Reap[collatz[n]]
runcoll[115]
(*
-> {115, 346, 173, 520, 260, 130, 65, 196, 98, 49, 148, 74, 37, 112, 56,
28, 14, 7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1}
*)
or
colSeq[x_] := NestWhileList[
Which[
EvenQ[#], #/2,
True, 3*# + 1] &,
x,
# \[NotEqual] 1 &]
so that eg
colSeq[115]
(*
-> {115, 346, 173, 520, 260, 130, 65, 196, 98, 49, 148, 74, 37, 112, 56,
28, 14, 7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1}
*)
By the way, the fastest approach I could come up with (I think I needed it for some Project Euler problem) was something like
Clear@collatz;
collatz[1] := {1}
collatz[n_] := collatz[n] = If[
EvenQ[n] && n > 0,
{n}~Join~collatz[n/2],
{n}~Join~collatz[3*n + 1]]
compare:
colSeq /@ Range[20000]; // Timing
(*
-> {6.87047, Null}
*)
while
Block[{$RecursionLimit = \[Infinity]},
collatz /@ Range[20000];] // Timing
(*
-> {0.54443, Null}
*)
(we need to increase the recursion limit to get this to run correctly).
You got the recursive cases right, but you have no base case to terminate the rewriting, so it keeps going forever (or rather, until Mathematica hits its ReplaceRepeated limit). If you stop when you reach 1, it works as expected:
In[1]:= A = {1,2,3,4}
Out[1]= {1,2,3,4}
In[2]:= A //. {x_?EvenQ /; x>1 -> x/2, x_?OddQ /; x>1 -> 3 x+1}
Out[2]= {1,1,1,1}
In the documentation center, the section about writing packages is illustrated with a Collatz function example.
Ruby's Array#sort will, by default, sort numbers like this, in order of their value:
[11, 12, 13, 112, 113, 124, 125, 127]
I'd like to sort an array of numbers like this, as though they were words being alphabetized:
[11, 112, 113, 12, 124, 125, 127, 13]
How can I do this? (Ultimately, I want to do this with Hash keys, so if you want to answer that way instead, that's fine.) Also, is there a name for this type of sort?
You are all crazy ))) I have a solution like this:
a.sort_by &:to_s
Well, one way is to convert all of the values to strings, then convert them back.
a = [11, 12, 13, 112, 113, 124, 125, 127]
a = a.map(&:to_s).sort.map(&:to_i)
p a # => [11, 112, 113, 12, 124, 125, 127, 13]
You can pass in a block to sort that accepts two arguments and returns the result of your own custom-defined comparison function. The example should speak for itself, but should you have any questions, feel free to ask.
a = [11, 112, 113, 12, 124, 125, 127, 13]
new_a = a.sort do |x, y|
  "#{x}" <=> "#{y}"
end
puts new_a
A note: I suspect that the reason you're looking for this sort of solution is because the objects you want sorted are not Integers at heart. It might be worthwhile and semantically more pleasing to subclass Integer. Although it will obviously make instantiation harder, it feels more correct, at least to me.