Solving optimisation sub-instances in parallel using Pyomo (Traceback)

I am trying to solve an energy model with Benders decomposition.
In the model we create a master model and several sub-models, and I want to solve the sub-models in parallel; I saw an example here.
This is what I am using in the code:
from pyomo.opt.base import SolverFactory
from pyomo.opt.parallel import SolverManagerFactory
from pyomo.opt.parallel.manager import solve_all_instances

subs = []
for m in range(0, len(supportsteps)-1):
    subs.append(urbs.create_model(data,
                                  range(supportsteps[m], supportsteps[m+1]+1),
                                  supportsteps, type=1))

solver_manager = SolverManagerFactory("pyro")
solve_all_instances(solver_manager, 'gurobi', subs)
Which gives an error:
Error Message
So what am I doing wrong?
Or, is it not possible to solve them in parallel?

The error message that you're seeing means that SolverManagerFactory("pyro") gave you None. It's possible that pyro isn't installed or on your PATH.
Try installing the Pyomo extras: conda install -c conda-forge pyomo.extras or pyomo install-extras
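As a quick way to confirm this diagnosis, you can check the factory's return value before using it. A minimal sketch (guarded so it also runs where Pyomo itself is missing; the message text is my own):

```python
try:
    from pyomo.opt.parallel import SolverManagerFactory
    solver_manager = SolverManagerFactory("pyro")
except Exception:
    # Pyomo is absent, or this import path is unavailable in this release
    solver_manager = None

if solver_manager is None:
    # SolverManagerFactory returns None when the requested manager
    # (here: pyro) is unknown or its dependencies are missing, which
    # later produces the error you are seeing
    print("pyro solver manager unavailable; install the Pyomo extras")
```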

Related

how to resolve AttributeError: module 'graphviz.backend' has no attribute 'ENCODING'

I am not sure why I get an AttributeError: module 'graphviz.backend' has no attribute 'ENCODING' when I try to export a regression tree to graphviz. I tried re-installing graphviz and sklearn, but that didn't solve the problem. I'd appreciate any advice on this.
AttributeError Traceback (most recent call last)
<ipython-input-4-9d9e0becf9b6> in <module>
3 # graphviz is the drawing tool
4 from sklearn.tree import export_graphviz
----> 5 import graphviz
6 dot_data = export_graphviz(
7 model,
C:\ProgramData\Anaconda3\lib\site-packages\graphviz\__init__.py in <module>
25 """
26
---> 27 from .dot import Graph, Digraph
28 from .files import Source
29 from .lang import escape, nohtml
C:\ProgramData\Anaconda3\lib\site-packages\graphviz\dot.py in <module>
30
31 from . import backend
---> 32 from . import files
33 from . import lang
34
C:\ProgramData\Anaconda3\lib\site-packages\graphviz\files.py in <module>
20
21
---> 22 class Base(object):
23
24 _engine = 'dot'
C:\ProgramData\Anaconda3\lib\site-packages\graphviz\files.py in Base()
26 _format = 'pdf'
27
---> 28 _encoding = backend.ENCODING
29
30 #property
AttributeError: module 'graphviz.backend' has no attribute 'ENCODING'
I had a similar issue when using pipdeptree. It would seem that there was a very recent change to graphviz, intended to obfuscate its internals. Quoting the module author's reply in issue #149 (a similar issue with backend.FORMATS):
Submodules of graphviz are not part of the public API (cf. https://graphviz.readthedocs.io/en/stable/api.html). Please stick to the documented interface and use graphviz.FORMATS; see https://graphviz.readthedocs.io/en/stable/api.html#graphviz.FORMATS.
In the short term, you could downgrade your graphviz module… it looks like 0.18 was the last tag before the submodules were made opaque.
Moving forward, you may wish to create an issue and/or pull request against the sklearn-pandas repository, to replace graphviz.backend.FORMATS with graphviz.FORMATS, or even just cap its graphviz dependency at 0.18.
I had the same issue, and I am very new to the Python/conda world, so this might help newbies like me.
I downloaded graphviz 0.19.1 from https://pypi.org/project/graphviz/#files (source distribution: graphviz-0.19.1.zip, 247.8 kB), replaced the graphviz folder in "C:\Users\Nino\anaconda3\Lib\site-packages" (the path will be different for you) with this version, and renamed it so that the name is again graphviz:
"C:\Users\Nino\anaconda3\Lib\site-packages\graphviz"
I hit the same problem, and the most-voted answer worked for me. Paste this to forcibly downgrade graphviz:
pip install --force-reinstall graphviz==0.18
I had the same error with python-graphviz==0.16. OP did not include a version number, but it looks like the line numbers in the traceback match with v0.16.
Note that the traceback shows the error to be inside the python-graphviz package, so it's more likely that it's an issue with a dependency.
With python-graphviz==0.19 I don't get the import error.
On a side note: Versions shown by conda list or pip list can be misleading. In case of doubt check the content of the __init__.py.
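That manual check can be scripted. Here is a small stdlib sketch (the helper name installed_version is my own) that reads __version__ straight out of a package's __init__.py instead of trusting pip list or conda list:

```python
import importlib.util
import pathlib
import re

def installed_version(pkg):
    # locate the package's __init__.py and grep its __version__ string,
    # reflecting what Python will actually import from disk
    spec = importlib.util.find_spec(pkg)
    if spec is None or spec.origin is None:
        return None
    text = pathlib.Path(spec.origin).read_text(errors="ignore")
    m = re.search(r"__version__\s*=\s*['\"]([^'\"]+)['\"]", text)
    return m.group(1) if m else None

# returns the on-disk version string, or None if graphviz is absent
print(installed_version("graphviz"))
```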
I solved this issue in a different way:
1. Open the graphviz folder on my PC through the following path (the path may differ): "C:\Users\Anoop\anaconda3\Lib\site-packages\graphviz\backend"
2. Copy the encoding.py file from there.
3. Paste this file in the backend: "C:\Users\Anoop\anaconda3\Lib\site-packages\graphviz\backend"
Problem solved.
In my case, it seems that the class Base in "C:\ProgramData\Anaconda3\lib\site-packages\graphviz\files.py" picks up the backend folder instead of backend.py on import.
Quick fix: go to "C:\ProgramData\Anaconda3\lib\site-packages\graphviz" and rename the backend folder to something else.
PS: since I didn't check the whole code, this may cause another dependency problem.
Thanks! Version 0.2 was troublesome, and this error disappeared after downgrading to 0.19.

Time module: Couldn't find a version that satisfies the requirement

I've tried to run this code:
from time import clock

def f2():
    t1 = clock()
    res = ' ' * 10**6
    print('f2:', clock()-t1)
but got Traceback:
from time import clock
ImportError: cannot import name 'clock' from 'time' (unknown location)
Python doesn't see the time module in the standard library?
I tried to install this module manually via pip (Yes, I know that it should already be installed. But what else could I do?). I got the following error in response:
ERROR: Could not find a version that satisfies the requirement time
ERROR: No matching distribution found for time
Trying to install the module via PyCharm also failed - it just runs pip and gets the same error.
I found the answer.
The clock() function in the time module was deprecated since Python 3.3 and removed in 3.8; see issue 36895.
So I used time.time() instead:
import time

def f2():
    t1 = time.time()
    res = ' ' * 10**8
    t2 = time.time()
    print('f2:', t2 - t1)
Strangely, while googling the problem, I noticed that many people in 2019-2021 (after clock() was removed in Python 3.8) hit this error, but no one wrote how to solve it.
So my answer might be really helpful.
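For timing specifically, time.perf_counter() is the replacement the deprecation notice recommends, since unlike time.time() it is guaranteed monotonic. A minimal rewrite of the snippet above:

```python
import time

def f2():
    # perf_counter() is the documented replacement for the removed clock()
    t1 = time.perf_counter()
    res = ' ' * 10**6
    t2 = time.perf_counter()
    print('f2:', t2 - t1)
    return t2 - t1
```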

How to parallelize a function written in cython (Jupyter Notebooks)

I am looking for ways to decrease the computational time of certain tasks. After checking different options, I found that joblib and cython worked best for me in a jupyter environment. However, even though I succeeded in implementing these approaches separately, I failed in combining them.
Below is an example of a sample code that gives the same error message (run in jupyter notebooks):
%load_ext Cython

%%cython
cpdef test(int item):
    cdef int y = 0
    cdef int i
    for i in range(10):
        y += item
    return y

from joblib import Parallel, delayed
res = Parallel(n_jobs=-1)(delayed(test)(i) for i in range(20))
Error Message: BrokenProcessPool: A task has failed to un-serialize. Please ensure that the arguments of the function are all picklable.
According to this documentation, the used variables should be picklable, so I am not sure of the reason behind this behavior.

Geopandas to_file() gives an error regarding fiona.drivers(). Is it possible to work around this?

I'm using geopandas to get WKT and coordinates from a database:
df = pandas.read_sql(con=conn2, sql=test_query)
df['Coordinates'] = df['WKT'].apply(lambda x: wkt.loads(x.read()))
gdf = geopandas.GeoDataFrame(df, geometry='Coordinates')
loc = r"...\Layers\geopandastest2.shp"
gdf.to_file(loc)
When I use to_file() it gives me the following error:
C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\lib\site-packages\geopandas\io\file.py:108: FionaDeprecationWarning: Use fiona.Env() instead.
with fiona.drivers():
Is it possible to get around this and force to_file() to use fiona.Env() or do I need to wait for geopandas to be updated?
Relevant geopandas github issue: https://github.com/geopandas/geopandas/issues/845
It is just a warning; your file should be saved anyway. It is already fixed in GeoPandas master (https://github.com/geopandas/geopandas/pull/854), which should be released soon.
You don't have to do anything about it now; it does not affect your script.
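If the warning is noisy in the meantime, it can be silenced until the fixed GeoPandas release lands. A sketch with the stdlib warnings module; for the real case you would pass fiona's FionaDeprecationWarning class instead of the generic DeprecationWarning used here, and the warn() call stands in for gdf.to_file(loc):

```python
import warnings

with warnings.catch_warnings():
    # suppress only this category, and only inside the with-block
    warnings.simplefilter("ignore", DeprecationWarning)
    warnings.warn("Use fiona.Env() instead.", DeprecationWarning)
```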

Regarding using the pre-trained im2txt model

I have followed every step from https://edouardfouche.com/Fun-with-Tensorflow-im2txt/
but I get the following error:
NotFoundError (see above for traceback): Tensor name "lstm/basic_lstm_cell/bias" not found in checkpoint files /home/asadmahmood72/Image_to_text/models/im2txt/model.ckpt-3000000
[[Node: save/RestoreV2_380 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_380/tensor_names, save/RestoreV2_380/shape_and_slices)]]
My OS is Ubuntu 16.04 and my TensorFlow version is 1.2.0.
This is a bit late, but hopefully this answer will help future people who encounter this problem.
Like Edouard mentioned, this error is caused because of a change in the Tensorflow API. If you want to use a more recent version of Tensorflow, there are a few ways I know of to "update" your checkpoint:
Use the official checkpoint_convert.py utility included in Tensorflow, or
Use this solution written by 0xDFDFDF on GitHub to rename the offending variables:
OLD_CHECKPOINT_FILE = "model.ckpt-1000000"
NEW_CHECKPOINT_FILE = "model2.ckpt-1000000"

import tensorflow as tf

vars_to_rename = {
    "lstm/basic_lstm_cell/weights": "lstm/basic_lstm_cell/kernel",
    "lstm/basic_lstm_cell/biases": "lstm/basic_lstm_cell/bias",
}
new_checkpoint_vars = {}
reader = tf.train.NewCheckpointReader(OLD_CHECKPOINT_FILE)
for old_name in reader.get_variable_to_shape_map():
    if old_name in vars_to_rename:
        new_name = vars_to_rename[old_name]
    else:
        new_name = old_name
    new_checkpoint_vars[new_name] = tf.Variable(reader.get_tensor(old_name))

init = tf.global_variables_initializer()
saver = tf.train.Saver(new_checkpoint_vars)

with tf.Session() as sess:
    sess.run(init)
    saver.save(sess, NEW_CHECKPOINT_FILE)
I used option #2, and loading my checkpoint worked perfectly after that.
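The core of option #2 is just a dictionary lookup with a fallback to the old name. Stripped of TensorFlow, the mapping step looks like this (the embedding/map entry is a made-up variable name for illustration):

```python
vars_to_rename = {
    "lstm/basic_lstm_cell/weights": "lstm/basic_lstm_cell/kernel",
    "lstm/basic_lstm_cell/biases": "lstm/basic_lstm_cell/bias",
}

old_names = ["lstm/basic_lstm_cell/weights",
             "lstm/basic_lstm_cell/biases",
             "embedding/map"]  # hypothetical variable, passes through unchanged

# rename where a mapping exists, keep the old name otherwise
new_names = [vars_to_rename.get(name, name) for name in old_names]
```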
It looks like the TensorFlow API changed again, which makes it incompatible with the checkpoint model. I was using TensorFlow 0.12.1 in the article. Can you try whether it works with TensorFlow 0.12.1? Otherwise you will have to train the model yourself (expensive) or find a checkpoint file that was generated with a more recent version of TensorFlow.
