Time module: Couldn't find a version that satisfies the requirement - pip

I've tried to run this code:
from time import clock
def f2():
    t1 = clock()
    res = ' ' * 10**6
    print('f2:', clock() - t1)
but got this traceback:
from time import clock
ImportError: cannot import name 'clock' from 'time' (unknown location)
Does Python not see the time module in the standard library?
I tried to install the module manually via pip (yes, I know it should already be installed, but what else could I do?) and got the following error in response:
ERROR: Could not find a version that satisfies the requirement time
ERROR: No matching distribution found for time
Trying to install the module via PyCharm also failed - it just runs pip and gets the same error.

I found the answer.
The clock() function in the time module was deprecated since Python 3.3 and removed in 3.8; see issue 36895.
So I used time.time() instead:
import time
def f2():
    t1 = time.time()
    res = ' ' * 10**8
    t2 = time.time()
    print('f2:', t2 - t1)
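For benchmarking specifically, time.perf_counter() is what the removal notice recommends in place of clock(), since it is a monotonic, high-resolution timer. A minimal sketch of the same benchmark with it:

import time

def f2():
    t1 = time.perf_counter()  # monotonic, high-resolution clock
    res = ' ' * 10**8         # the workload being timed
    print('f2:', time.perf_counter() - t1)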
Strangely, while googling the problem I noticed that many people hit this error in 2019-2021 (after clock() was removed in Python 3.8), but no one wrote how to solve it.
So my answer might really help someone.

Related

PyCharm terminal freezes when using some modules

PyCharm's terminal freezes (or loops) when executing the file, and the computer starts to heat up.
Code that doesn't run:
import time
import requests
start_time = time.time()
print(start_time)
Code that works:
import time
# import requests
start_time = time.time()
print(start_time)
The problem occurs when certain libraries are imported, for example requests, pycoingecko, and pymongo; I assume these are the libraries that access the Internet.
The problem appeared unexpectedly: nothing in the program's settings changed, and everything worked before.
System: Ubuntu.
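One way to narrow this down (a diagnostic sketch, not part of the original post) is to dump a traceback if the import never finishes, so you can see exactly where it is stuck:

import faulthandler

# If `import requests` has not completed after 10 seconds, dump a
# traceback showing where execution is stuck, then exit.
faulthandler.dump_traceback_later(10, exit=True)
import requests  # the import that appears to hang
faulthandler.cancel_dump_traceback_later()
print("import completed normally")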

How to parallelize a function written in cython (Jupyter Notebooks)

I am looking for ways to decrease the computational time of certain tasks. After checking different options, I found that joblib and Cython worked best for me in a Jupyter environment. However, even though I succeeded in implementing these approaches separately, I failed to combine them.
Below is sample code that produces the same error message (run in Jupyter notebooks):
%load_ext Cython

%%cython
cpdef test(int item):
    cdef int y = 0
    cdef int i
    for i in range(10):
        y += item
    return y

from joblib import Parallel, delayed

res = Parallel(n_jobs=-1)(delayed(test)(i) for i in range(20))
Error Message: BrokenProcessPool: A task has failed to un-serialize. Please ensure that the arguments of the function are all picklable.
According to this documentation, the variables used should be picklable, so I am not sure what causes this behavior.
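A likely cause (my note, not from the original post): functions compiled in a %%cython cell live in a dynamically generated extension module that freshly spawned worker processes cannot import, so they fail to unpickle on the worker side. One workaround sketch that avoids pickling altogether is joblib's threading backend:

from joblib import Parallel, delayed

# Threads share one process, so `test` (from the %%cython cell above)
# never needs to be pickled. Note that the GIL limits the speedup
# unless the Cython code releases it (e.g. in a `with nogil:` block).
res = Parallel(n_jobs=-1, prefer="threads")(delayed(test)(i) for i in range(20))
print(res)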

Fuchsia OS fx set failing during build

After going through the installation steps at https://fuchsia.googlesource.com/fuchsia/+/master/docs/getting_started.md, I used the command fx set x64, which produced an error in the build/gn/preprocess_products.py file.
The error message is shown below:
(base) xprilion@xl:~/fuchsia$ fx set x64
+ /home/xprilion/fuchsia/scripts/build-zircon.sh -v -g -t x64
+ /home/xprilion/fuchsia/zircon/prebuilt/downloads/gn gen /home/xprilion/fuchsia/out/build-zircon --root=/home/xprilion/fuchsia/zircon
Done. Made 12304 targets from 900 files in 3216ms
+ /home/xprilion/fuchsia/buildtools/gn gen /home/xprilion/fuchsia/out/x64 --check '--args=target_cpu="x64" import("//boards/x64.gni") import("//products/core.gni") if (!defined(available)) { available = [] } available+=[] if (!defined(preinstall)) { preinstall = [] } preinstall+=[] if (!defined(monolith)) { monolith = [] } monolith+=[]'
ERROR at //build/gn/packages.gni:71:26: Script returned non-zero exit code.
_preprocessed_products = exec_script("preprocess_products.py",
^----------
Current dir: /home/xprilion/fuchsia/out/x64/
Command: /usr/bin/env /home/xprilion/fuchsia/build/gn/preprocess_products.py --monolith=["garnet/packages/products/base", "garnet/packages/prod/drivers"] --preinstall=[] --available=["garnet/packages/prod/vboot_reference", "bundles/tools"]
Returned 1.
stderr:
Traceback (most recent call last):
File "/home/xprilion/fuchsia/build/gn/preprocess_products.py", line 11, in <module>
from prepreprocess_build_packages import PackageImportsResolver, PackageLabelObserver
File "/home/xprilion/fuchsia/build/gn/prepreprocess_build_packages.py", line 74
except IOError, e:
^
SyntaxError: invalid syntax
See //build/gn/BUILD.gn:7:1: whence it was imported.
import("//build/gn/packages.gni")
^-------------------------------
How do I fix this error?
The answer to the above problem is simple: right now Python 3.7 is not supported when building Fuchsia. (The failing line, except IOError, e:, is Python 2 style syntax.) I changed to Python 3.6 and it worked! Python 2.7 works as well.
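To check which interpreter the build scripts will actually pick up before re-running fx set, assuming their shebang is #!/usr/bin/env python (I have not verified this), something like the following works:

import subprocess

# Diagnostic sketch: see what `python` resolves to on the PATH.
out = subprocess.run(["/usr/bin/env", "python", "--version"],
                     capture_output=True, text=True)
print(out.stdout or out.stderr)  # Python 2 prints its version to stderr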
You are also missing the product information here. If you are not sure which product to choose, select core at the least; it holds the bringup component as well as the minimal services needed for Fuchsia to run:
$ fx set core.x64

Solving optimisation sub-instances in Parallel, using Pyomo ( Traceback )

I am trying to solve an energy model with Benders decomposition.
In the model we create a master model and several sub models, and I want to solve the sub models in parallel; I saw an example here.
This is what I am using in the code:
from pyomo.opt.base import SolverFactory
from pyomo.opt.parallel import SolverManagerFactory
from pyomo.opt.parallel.manager import solve_all_instances
subs = []
for m in range(0, len(supportsteps) - 1):
    subs.append(urbs.create_model(data,
                                  range(supportsteps[m], supportsteps[m+1] + 1),
                                  supportsteps, type=1))
solver_manager = SolverManagerFactory("pyro")
solve_all_instances(solver_manager, 'gurobi', subs)
Which gives an error (the message was posted as a screenshot).
So, what am I doing wrong?
Or is it not possible to solve them in parallel?
The error message you're seeing means that SolverManagerFactory("pyro") returned None; most likely pyro isn't installed or isn't on your PATH.
Try installing the Pyomo extras: conda install -c conda-forge pyomo.extras or pyomo install-extras
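As a sanity check, you can fail fast when the factory comes back empty (a sketch based on the None behavior described above):

from pyomo.opt.parallel import SolverManagerFactory

# Raise a clear error immediately instead of a confusing traceback later.
solver_manager = SolverManagerFactory("pyro")
if solver_manager is None:
    raise RuntimeError("The 'pyro' solver manager is not available; "
                       "install the Pyomo extras (Pyro4) first.")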

regarding using the pre trained im2txt model

I have followed every step from https://edouardfouche.com/Fun-with-Tensorflow-im2txt/,
but I get the following error:
NotFoundError (see above for traceback): Tensor name "lstm/basic_lstm_cell/bias" not found in checkpoint files /home/asadmahmood72/Image_to_text/models/im2txt/model.ckpt-3000000
[[Node: save/RestoreV2_380 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_380/tensor_names, save/RestoreV2_380/shape_and_slices)]]
My OS is Ubuntu 16.04 and my TensorFlow version is 1.2.0.
This is a bit late, but hopefully this answer will help future people who encounter this problem.
Like Edouard mentioned, this error is caused by a change in the TensorFlow API. If you want to use a more recent version of TensorFlow, there are a few ways I know of to "update" your checkpoint:
Use the official checkpoint_convert.py utility included in Tensorflow, or
Use this solution written by 0xDFDFDF on GitHub to rename the offending variables:
import tensorflow as tf

OLD_CHECKPOINT_FILE = "model.ckpt-1000000"
NEW_CHECKPOINT_FILE = "model2.ckpt-1000000"

# Map the old variable names to the names the newer API expects.
vars_to_rename = {
    "lstm/basic_lstm_cell/weights": "lstm/basic_lstm_cell/kernel",
    "lstm/basic_lstm_cell/biases": "lstm/basic_lstm_cell/bias",
}

# Copy every variable out of the old checkpoint, renaming where needed.
new_checkpoint_vars = {}
reader = tf.train.NewCheckpointReader(OLD_CHECKPOINT_FILE)
for old_name in reader.get_variable_to_shape_map():
    if old_name in vars_to_rename:
        new_name = vars_to_rename[old_name]
    else:
        new_name = old_name
    new_checkpoint_vars[new_name] = tf.Variable(reader.get_tensor(old_name))

# Write the renamed variables out as a new checkpoint.
init = tf.global_variables_initializer()
saver = tf.train.Saver(new_checkpoint_vars)
with tf.Session() as sess:
    sess.run(init)
    saver.save(sess, NEW_CHECKPOINT_FILE)
I used option #2, and loading my checkpoint worked perfectly after that.
It looks like the TensorFlow API changed again, which makes it incompatible with the checkpoint model. I was using TensorFlow 0.12.1 in the article. Can you try whether it works with TensorFlow 0.12.1? Otherwise you will have to train the model yourself (expensive) or find a checkpoint file that was generated with a more recent version of TensorFlow...
