I want to add the Hamming weight of a BitVec to a solver, but something is wrong with it - z3py

I first added constraints on the BitVecs, and then added the Hamming weights of these BitVecs to a constraint, but the Hamming-weight constraint doesn't work.
When I plug the solution back into the constraints, it satisfies the constraints of the first part but does not satisfy the Hamming-weight constraint.
from z3 import *

s=Solver()

def hammingWeight(x,n):
    return sum(ZeroExt(n,Extract(i,i,x)) for i in range(n))

rounds=3
weight=3
x=[BitVec('x'+str(i),16) for i in range(rounds+1)]
y=[BitVec('y'+str(i),16) for i in range(rounds+1)]
z=[BitVec('z'+str(i),16) for i in range(rounds)]
hw=[BitVec('z'+str(i),16) for i in range(rounds)]

def round(r,x,y,z,hw):
    varbits=RotateLeft(x[r],8)|RotateLeft(x[r],1)
    doublebits=RotateLeft(x[r],1)&(~(RotateLeft(x[r],8))&RotateLeft(x[r],15))
    y[r+1]=x[r]
    x[r+1]=y[r]^z[r]^RotateLeft(x[r],2)
    hw[r]=varbits^doublebits
    s.add(z[r]&varbits==0,(z[r]^RotateLeft(z[r],7))&doublebits==0)

for r in range(rounds):
    round(r,x,y,z,hw)

s.add(hammingWeight(x[0],16)+hammingWeight(y[0],16)!=0)

hw=0
for r in range(rounds):
    hw+=hammingWeight(hw[r],16)
s.add(hw<=weight)

print(s.check())
print(s.model())
The solver finds a model, but the model does not satisfy the constraint hw<=weight.

When I run your program, I get:
Traceback (most recent call last):
File "a.py", line 29, in <module>
hw+=hammingWeight(hw[r],16)
TypeError: 'int' object has no attribute '__getitem__'
This is because you first defined hw as a list of bit-vectors and later assigned it to 0. Perhaps a cut-and-paste error? Please repost after fixing that first.
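For reference, here is a minimal sketch of what that fix might look like (the accumulator name total_weight is my own, not from the original post): keep the list hw of bit-vectors intact and sum the per-round weights into a separately named variable before adding the constraint.

total_weight = 0
for r in range(rounds):
    # accumulate the Hamming weight of each round's bit-vector
    total_weight = total_weight + hammingWeight(hw[r], 16)
s.add(total_weight <= weight)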

Related

Why in R "predict" function of Keras doesn't recognize correct input dimensions?

I have implemented a Keras LSTM network in R. I can train it and the fit function works correctly, but when I use the "predict" function I get the following errors:
pred <- base_model %>% predict(matchsetarray)
WARNING:tensorflow:Model was constructed with shape (96, 2, 26) for input KerasTensor(type_spec=TensorSpec(shape=(96, 2, 26), dtype=tf.float32, name='lstm_3_input'), name='lstm_3_input', description="created by layer 'lstm_3_input'"), but it was called on an input with incompatible shape (32, 2, 26).
Error in py_call_impl(callable, dots$args, dots$keywords) :
ValueError: in user code:
File "C:\Users\Utente\AppData\Local\R-MINI~1\envs\R-RETI~1\lib\site-packages\keras\engine\training.py", line 1801, in predict_function *
return step_function(self, iterator)
File "C:\Users\Utente\AppData\Local\R-MINI~1\envs\R-RETI~1\lib\site-packages\keras\engine\training.py", line 1790, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "C:\Users\Utente\AppData\Local\R-MINI~1\envs\R-RETI~1\lib\site-packages\keras\engine\training.py", line 1783, in run_step **
outputs = model.predict_step(data)
File "C:\Users\Utente\AppData\Local\R-MINI~1\envs\R-RETI~1\lib\site-packages\keras\engine\training.py", line 1751, in predict_step
return self(x, training=False)
File "C:\Users\Utente\AppData\Local\R-MINI~1\envs\R-RETI~1\lib\site-packages\keras\utils\traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "C:\Users\
However, the dimension of the predict input is (96, 2, 26), but "predict" treats the input "matchsetarray" as having dimension (32, 2, 26). Can I force predict to read the correct format?
I tried to verify the dimension of the input "matchsetarray":
dim(matchsetarray)
[1] 96 2 26
It's the correct dimension expected by the "predict" function.

With ruamel.yaml how can I conditionally convert flow maps to block maps based on line length?

I'm working on a ruamel.yaml (v0.17.4) based YAML reformatter (using the RoundTrip variant to preserve comments).
I want to allow a mix of block- and flow-style maps, but in some cases, I want to convert a flow-style map to use block-style.
In particular, if the flow-style map would be longer than the max line length^, I want to convert that to a block-style map instead of wrapping the line somewhere in the middle of the flow-style map.
^ By "max line length" I mean the best_width that I configure by setting something like yaml.width = 120 where yaml is a ruamel.yaml.YAML instance.
What should I extend to achieve this? The emitter is where the line-length gets calculated so wrapping can occur, but I suspect that is too late to convert between block- and flow-style. I'm also concerned about losing comments when I switch the styles. Here are some possible extension points, can you give me a pointer on where I'm most likely to have success with this?
Emitter.expect_flow_mapping() probably too late for converting flow->block
Serializer.serialize_node() probably too late as it consults node.flow_style
RoundTripRepresenter.represent_mapping() maybe? but this has no idea about line length
I could also walk the data before calling yaml.dump(), but this has no idea about line length.
So where can I, and where should I, adjust the flow_style depending on whether a flow-style map would trigger line wrapping?
I think the most accurate approach, when you encounter a flow-style mapping in the dumping process, is to first try to emit it to a buffer, get the length of that buffer, and, if that length combined with the column you are in exceeds the width, emit block style instead.
Any attempt to guesstimate the length of the output without actually trying to write that part of the tree is going to be hard, if not impossible. Among other things the dumping process actually dumps scalars and reads them back to make sure no quoting needs to be forced (e.g. when you dump a string that reads back like a date). It also handles single key-value pairs in a list in a special way ([1, a: 42, 3] instead of the more verbose [1, {a: 42}, 3]). So a simple calculation of the length of the scalars that are the keys and values, plus the separating commas, colons and spaces, is not going to be precise.
A different approach is to dump your data with a large line width and parse the output and make a set of line numbers for which the line is too long according to the width that you actually want to use. After loading that output back you can walk over the data structure recursively, inspect the .lc attribute to determine the line number on which a flow style mapping (or sequence) started and if that line number is in the set you built beforehand change the mapping to block style. If you have nested flow-style collections, you might have to repeat this process.
If you run the following, the initial dumped value for quote will be on one line. The change_to_block method as presented changes all mappings/sequences that start on a line that is too long.
import sys
import ruamel.yaml

yaml_str = """\
movie: bladerunner
quote: {[Batty, Roy]: [
    I have seen things you people wouldn't believe.,
    Attack ships on fire off the shoulder of Orion.,
    I watched C-beams glitter in the dark near the Tannhäuser Gate.,
  ]}
"""

class Blockify:
    def __init__(self, width, only_first=False, verbose=0):
        self._width = width
        self._yaml = None
        self._only_first = only_first
        self._verbose = verbose

    @property
    def yaml(self):
        if self._yaml is None:
            self._yaml = y = ruamel.yaml.YAML(typ=['rt', 'string'])
            y.preserve_quotes = True
            y.width = 2**16
        return self._yaml

    def __call__(self, d):
        pass_nr = 0
        changed = [True]
        while changed[0]:
            changed[0] = False
            try:
                s = self.yaml.dumps(d)
            except AttributeError:
                print("use 'pip install ruamel.yaml.string' to install plugin that gives 'dumps' to string")
                sys.exit(1)
            if self._verbose > 1:
                print(s)
            too_long = set()
            max_ll = -1
            for line_nr, line in enumerate(s.splitlines()):
                if len(line) > self._width:
                    too_long.add(line_nr)
                if len(line) > max_ll:
                    max_ll = len(line)
            if self._verbose > 0:
                print(f'pass: {pass_nr}, lines: {sorted(too_long)}, longest: {max_ll}')
                sys.stdout.flush()
            new_d = self.yaml.load(s)
            self.change_to_block(new_d, too_long, changed, only_first=self._only_first)
            d = new_d
            pass_nr += 1
        return d, s

    @staticmethod
    def change_to_block(d, too_long, changed, only_first):
        if isinstance(d, dict):
            if d.fa.flow_style() and d.lc.line in too_long:
                d.fa.set_block_style()
                changed[0] = True
                return  # don't convert nested flow styles, might not be necessary
            # don't change keys if any value is changed
            for v in d.values():
                Blockify.change_to_block(v, too_long, changed, only_first)
                if only_first and changed[0]:
                    return
            if changed[0]:  # don't change keys if value has changed
                return
            for k in d:
                Blockify.change_to_block(k, too_long, changed, only_first)
                if only_first and changed[0]:
                    return
        if isinstance(d, (list, tuple)):
            if d.fa.flow_style() and d.lc.line in too_long:
                d.fa.set_block_style()
                changed[0] = True
                return  # don't convert nested flow styles, might not be necessary
            for elem in d:
                Blockify.change_to_block(elem, too_long, changed, only_first)
                if only_first and changed[0]:
                    return

blockify = Blockify(96, verbose=2)  # set verbose to 0, to suppress progress output

yaml = ruamel.yaml.YAML(typ=['rt', 'string'])
data = yaml.load(yaml_str)
blockified_data, string_output = blockify(data)
print('-'*32, 'result:', '-'*32)
print(string_output)  # string_output has no final newline
which gives:
movie: bladerunner
quote: {[Batty, Roy]: [I have seen things you people wouldn't believe., Attack ships on fire off the shoulder of Orion., I watched C-beams glitter in the dark near the Tannhäuser Gate.]}

pass: 0, lines: [1], longest: 186
movie: bladerunner
quote:
  [Batty, Roy]: [I have seen things you people wouldn't believe., Attack ships on fire off the shoulder of Orion., I watched C-beams glitter in the dark near the Tannhäuser Gate.]

pass: 1, lines: [2], longest: 179
movie: bladerunner
quote:
  [Batty, Roy]:
  - I have seen things you people wouldn't believe.
  - Attack ships on fire off the shoulder of Orion.
  - I watched C-beams glitter in the dark near the Tannhäuser Gate.

pass: 2, lines: [], longest: 67
-------------------------------- result: --------------------------------
movie: bladerunner
quote:
  [Batty, Roy]:
  - I have seen things you people wouldn't believe.
  - Attack ships on fire off the shoulder of Orion.
  - I watched C-beams glitter in the dark near the Tannhäuser Gate.
Please note that when using ruamel.yaml<0.18 the sequence [Batty, Roy] will never be put in block style, because the tuple subclass CommentedKeySeq never gets a line number attached.

Why can't we convert flat columns of awkward1 arrays `to_parquet`?

A follow-up to this question: Best way to save a dict of awkward1 arrays?
To save multiple columns of nested awkward1 arrays (with varying length):
import numpy as np
import awkward1 as ak

dog = ak.from_iter([[1, 2], [5]])
cat = ak.from_iter([[4]])
pets = ak.zip({"dog": dog[np.newaxis], "cat": cat[np.newaxis]}, depth_limit=1)
ak.to_parquet(pets, "pets.parquet")
Unfortunately, this doesn't seem to work for flat lists:
import numpy as np
import awkward1 as ak

dog = ak.from_iter([1, 2, 5])
cat = ak.from_iter([4])
pets = ak.zip({"dog": dog[np.newaxis], "cat": cat[np.newaxis]}, depth_limit=1)
ak.to_parquet(pets, "pets.parquet")
which creates the error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-31-7f3a7fefb261> in <module>
3 cat = ak.from_iter([3])
4 pets = ak.zip({"dog": dog[np.newaxis], "cat": cat[np.newaxis]}, depth_limit=1)
----> 5 ak.to_parquet(pets, "pets.parquet")
~/Programs/anaconda3/envs/tree/lib/python3.7/site-packages/awkward/operations/convert.py in to_parquet(array, where, explode_records, list_to32, string_to32, bytestring_to32, **options)
2983 layout = to_layout(array, allow_record=False, allow_other=False)
2984 iterator = batch_iterator(layout)
-> 2985 first = next(iterator)
2986
2987 if "schema" not in options:
~/Programs/anaconda3/envs/tree/lib/python3.7/site-packages/awkward/operations/convert.py in batch_iterator(layout)
2978 )
2979 yield pyarrow.RecordBatch.from_arrays(
-> 2980 pa_arrays, schema=pyarrow.schema(pa_fields)
2981 )
2982
~/Programs/anaconda3/envs/tree/lib/python3.7/site-packages/pyarrow/table.pxi in pyarrow.lib.RecordBatch.from_arrays()
TypeError: object of type 'pyarrow.lib.Tensor' has no len()
What is the reason for encountering this error?
What you found is a bug, and now it is fixed: https://github.com/scikit-hep/awkward-1.0/pull/799
What's happening here is that pyarrow can't write pyarrow.lib.Tensor (regular-length lists, such as the one you created with np.newaxis) to Parquet files. Parquet files don't have a concept of "regular-length list," so that makes sense. But rather than converting it, pyarrow hits an unhandled case, in which it fails to find the length of that pyarrow.lib.Tensor. (It's a little odd that pyarrow.lib.Tensor doesn't have a __len__ method, but that's another thing.)
Anyway, with version 1.2.0 of Awkward Array, we'll simply convert regular-length lists into (in principle) variable-length lists when writing to Parquet, since the format doesn't have that type. According to the schedule, version 1.2.0 will be released tomorrow. (This bug-fix is likely the last prerelease.)
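Until that release, one possible workaround is a sketch under the assumption that ak.from_regular is available in your Awkward 1.x installation: convert the fixed-size (regular) dimension introduced by np.newaxis into a variable-length list type before writing, which is the type Parquet can represent directly.

import numpy as np
import awkward1 as ak

dog = ak.from_iter([1, 2, 5])
cat = ak.from_iter([4])
# ak.from_regular turns the fixed-size dimension created by np.newaxis
# into a variable-length list, avoiding the pyarrow.lib.Tensor path.
pets = ak.zip(
    {"dog": ak.from_regular(dog[np.newaxis]), "cat": ak.from_regular(cat[np.newaxis])},
    depth_limit=1,
)
ak.to_parquet(pets, "pets.parquet")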

How to calculate shap values for ADABoost model?

I am running 3 different models (Random Forest, Gradient Boosting, AdaBoost) and a model ensemble based on these 3 models.
I managed to use SHAP for GB and RF but not for AdaBoost, where I get the following error:
Exception Traceback (most recent call last)
in engine
----> 1 explainer = shap.TreeExplainer(model,data = explain_data.head(1000), model_output= 'probability')
/home/cdsw/.local/lib/python3.6/site-packages/shap/explainers/tree.py in __init__(self, model, data, model_output, feature_perturbation, **deprecated_options)
110 self.feature_perturbation = feature_perturbation
111 self.expected_value = None
--> 112 self.model = TreeEnsemble(model, self.data, self.data_missing)
113
114 if feature_perturbation not in feature_perturbation_codes:
/home/cdsw/.local/lib/python3.6/site-packages/shap/explainers/tree.py in __init__(self, model, data, data_missing)
752 self.tree_output = "probability"
753 else:
--> 754 raise Exception("Model type not yet supported by TreeExplainer: " + str(type(model)))
755
756 # build a dense numpy version of all the tree objects
Exception: Model type not yet supported by TreeExplainer: <class 'sklearn.ensemble._weight_boosting.AdaBoostClassifier'>
I found this link on GitHub that states:
TreeExplainer creates a TreeEnsemble object from whatever model type we are trying to explain, and then works with that downstream. So all you would need to do is add another if statement in the
TreeEnsemble constructor similar to the one for gradient boosting.
But I really don't know how to implement it since I'm quite new to this.
I had the same problem, and what I did was to modify the file in the GitHub issue you mention.
In my case I use Windows, so the file is in C:\Users\my_user\AppData\Local\Continuum\anaconda3\Lib\site-packages\shap\explainers, but you can also double-click on the error message and the file will be opened.
The next step is to add another elif, as the answer in the GitHub issue says. In my case I did it from line 404, as follows:
1) Modify the source code.
        ...
            self.objective = objective_name_map.get(model.criterion, None)
            self.tree_output = "probability"
        elif str(type(model)).endswith("sklearn.ensemble.weight_boosting.AdaBoostClassifier'>"):  # From this line I have modified the code
            scaling = 1.0 / len(model.estimators_)  # output is average of trees
            self.trees = [Tree(e.tree_, normalize=True, scaling=scaling) for e in model.estimators_]
            self.objective = objective_name_map.get(model.base_estimator_.criterion, None)  # this line is done to get the decision criterion, for example gini
            self.tree_output = "probability"  # this is the last line I added
        elif str(type(model)).endswith("sklearn.ensemble.forest.ExtraTreesClassifier'>"):  # TODO: add unit test for this case
            scaling = 1.0 / len(model.estimators_)  # output is average of trees
            self.trees = [Tree(e.tree_, normalize=True, scaling=scaling) for e in model.estimators_]
        ...
Note that for the other models, the shap code needs the attribute 'criterion', which the AdaBoost classifier does not expose directly. In this case the attribute is obtained from the "weak" classifiers with which AdaBoost has been trained; that's why I add model.base_estimator_.criterion.
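To illustrate why that indirection is needed, here is a quick sketch (assuming a scikit-learn version where the fitted attribute base_estimator_ still exists): the boosted ensemble itself has no criterion attribute, only its weak learners do.

from sklearn.datasets import load_iris
from sklearn.ensemble import AdaBoostClassifier

X, y = load_iris(return_X_y=True)
model = AdaBoostClassifier().fit(X, y)

print(hasattr(model, "criterion"))       # False: the ensemble has no criterion of its own
print(model.base_estimator_.criterion)   # criterion of the weak learners, e.g. 'gini'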
Finally you have to import the library again, train your model and get the shap values. I leave an example:
2) Import again the library and try:
from sklearn import datasets
from sklearn.ensemble import AdaBoostClassifier
import shap
# import some data to play with
iris = datasets.load_iris()
X = iris.data
y = iris.target
ADABoost_model = AdaBoostClassifier()
ADABoost_model.fit(X, y)
shap_values = shap.TreeExplainer(ADABoost_model).shap_values(X)
shap.summary_plot(shap_values, X, plot_type="bar")
3) Get your new results: the call above produces a SHAP summary bar plot showing the importance of each feature for the AdaBoost model.
It seems that the shap package has been updated and still does not contain the AdaBoostClassifier. Based on the previous answer, I've modified it to work with the shap/explainers/tree.py file at lines 598-610:
        ### Added AdaBoostClassifier based on the outdated StackOverflow response and Github issue here
        ### https://stackoverflow.com/questions/60433389/how-to-calculate-shap-values-for-adaboost-model/61108156#61108156
        ### https://github.com/slundberg/shap/issues/335
        elif safe_isinstance(model, ["sklearn.ensemble.AdaBoostClassifier", "sklearn.ensemble._weight_boosting.AdaBoostClassifier"]):
            assert hasattr(model, "estimators_"), "Model has no `estimators_`! Have you called `model.fit`?"
            self.internal_dtype = model.estimators_[0].tree_.value.dtype.type
            self.input_dtype = np.float32
            scaling = 1.0 / len(model.estimators_)  # output is average of trees
            self.trees = [Tree(e.tree_, normalize=True, scaling=scaling) for e in model.estimators_]
            self.objective = objective_name_map.get(model.base_estimator_.criterion, None)  # this line is done to get the decision criterion, for example gini
            self.tree_output = "probability"  # this is the last line added
Also working on testing to add this to the package :)

MethodError: no method matching parseNLExpr_runtime

I get an error for the following Julia code for solving an NLP problem.
using JuMP
using Ipopt
using DataFrames, ConditionalJuMP

m = Model(solver=IpoptSolver())

# importing data
xi = [-1.016713795 0.804447648 0.932137661 1.064136698 -0.963217531
      -1.048396778 1.076371484 1.099027177 1.061556926 -0.95185481
      -0.980261302 0.271253807 0.184946729 1.062838197 -0.958794505
      -0.980703191 0.278820569 0.231132677 1.062967459 -0.959302488
      -0.953074503 -0.00768112 0.128808175 1.067743779 -0.978524489
      -1.014866259 0.815325963 1.065956208 1.067059122 -0.974682291
      -0.995088119 0.550359991 0.845087207 1.066556784 -0.973154887]
xj = xi
pii = [-300
       -259.6530828
       -284.3708955
       -291.3387625
       -342.4479859
       -356.5031603
       -351.0154738]
sample_size = 7
bus_num = 5
tempsum = 0

# define variables: a 2*bus_num by 2*bus_num matrix
@variable(m, aij[1:2*bus_num, 1:2*bus_num])

for n = 1:sample_size
    for i = 1:bus_num
        for j = 1:bus_num
            tempsum = (aij[i,j]*xi[n,i]*xj[n,j] - pii[n,2])^2 + tempsum
        end
    end
end

# define constraints
@constraint(m, [i=1:2*bus_num, j=1:2*bus_num], aij[i,j] == aij[j,i])
@constraint(m, [i=1:2*bus_num, j=1:2*bus_num], aij[i,j]*aij[i,j] <= aij[i,i]*aij[j,j])
@constraint(m, [i=1:2*bus_num, j=1:2*bus_num], aij[i,i]*aij[j,j] <= 0.25*(aij[i,i]+aij[j,j])*(aij[i,i]+aij[j,j]))

# define NLP objective function
@NLobjective(m, Min, tempsum)

solve(m)
println("m = ", getobjectivevalue(m))
println("Aij = ", getvalue(aij))
but when I run it, the error below is shown.
MethodError: no method matching parseNLExpr_runtime(::JuMP.Model,
::JuMP.GenericQuadExpr{Float64,JuMP.Variable},
::Array{ReverseDiffSparse.NodeData,1}, ::Int64, ::Array{Float64,1})
The error occurs on @NLobjective(m, Min, tempsum), but I don't know how to modify the code. Could anyone please help me?
Besides, I am also confused about how to add positive-definite constraints for the solution matrix aij here, as I want aij to be a positive definite matrix.
In @NLobjective(m, Min, tempsum), tempsum cannot be an expression built up outside the macro, as it is here, because @NLobjective cannot parse such a pre-built expression. The tempsum in this code must instead be a function registered with JuMP.register(***), or the expression must be written inside the macro itself.
