I'm trying to run the Stable Diffusion Videos package: I've installed it and logged in to Hugging Face. When I run the example code from the package's GitHub page, I hit this error:
ImportError: DLL load failed while importing _ufuncs: %1 is not a valid Win32 application.
I have tried various solutions to this error, but so far none of them have worked. I'm running Windows 11 (64-bit) with Python 3.10. I've read about missing DLL files but am unsure how to find or install them to fix this problem.
from stable_diffusion_videos import StableDiffusionWalkPipeline
import torch

pipeline = StableDiffusionWalkPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
    revision="fp16",
).to("cuda")

video_path = pipeline.walk(
    prompts=['a cat', 'a dog'],
    seeds=[42, 1337],
    num_interpolation_steps=3,
    height=512,              # use multiples of 64 if > 512; multiples of 8 if < 512
    width=512,               # use multiples of 64 if > 512; multiples of 8 if < 512
    output_dir='dreams',     # where images/videos will be saved
    name='animals_test',     # subdirectory of output_dir where images/videos will be saved
    guidance_scale=8.5,      # higher adheres to the prompt more; lower lets the model take the wheel
    num_inference_steps=50,  # number of diffusion steps per generated image; 50 is a good default
)
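For reference, _ufuncs is a compiled SciPy extension module, and "%1 is not a valid Win32 application" usually indicates a 32-bit/64-bit mismatch between the interpreter and a compiled extension. A quick diagnostic sketch (it only inspects the environment; it is not a fix):

import platform
import struct

# Check whether the interpreter itself is 64-bit.
print(platform.architecture())   # expect ('64bit', 'WindowsPE') on 64-bit Python
print(struct.calcsize("P") * 8)  # pointer size in bits; expect 64

# Show which SciPy build is actually installed.
import scipy
print(scipy.__version__, scipy.__file__)

# Importing scipy.special triggers the same _ufuncs load that fails above.
import scipy.special

If the scipy.special import fails here with the same DLL error, reinstalling SciPy (e.g. pip install --force-reinstall scipy) is a common next step.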
I've tried to run this code:
from time import clock

def f2():
    t1 = clock()
    res = ' ' * 10**6
    print('f2:', clock() - t1)
but got this traceback:
from time import clock
ImportError: cannot import name 'clock' from 'time' (unknown location)
Python doesn't see the time module in the standard library?
I tried to install the module manually via pip (yes, I know it should already be installed, but what else could I do?). I got the following error in response:
ERROR: Could not find a version that satisfies the requirement time
ERROR: No matching distribution found for time
Trying to install the module via PyCharm also failed - it just runs pip and gets the same error.
I found the answer.
time.clock() was deprecated since Python 3.3 and removed in Python 3.8; see issue 36895.
So I used time.time() instead:
import time

def f2():
    t1 = time.time()
    res = ' ' * 10**8
    t2 = time.time()
    print('f2:', t2 - t1)
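Note that the deprecation notice recommends time.perf_counter() or time.process_time() for timing code, since perf_counter() uses the highest-resolution clock available. A minimal variant of the same function (same logic, just a different clock):

import time

def f2():
    # perf_counter() is the recommended replacement for clock() when benchmarking
    t1 = time.perf_counter()
    res = ' ' * 10**8
    print('f2:', time.perf_counter() - t1)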
Strangely, while googling the problem I noticed that many people hit this error in 2019-2021 (after clock() was removed in Python 3.8), but nobody wrote up how to solve it. So my answer might be really helpful.
I am trying to fit some random data to a GP with the RBF kernel using the GPy package. When I change the active dimensions, I get LinAlgError: not positive definite, even with jitter. The error only appears in a conda environment; when I install the packages with pip, I never run into it. Has anyone come across this?
import numpy as np
import GPy
import random

def func(x):
    return np.sum(np.power(x, 5) - np.power(x, 3))

# 20 random data points with 10 dimensions
random.seed(2)
random_sample = [[random.uniform(0, 3.4) for i in range(10)] for j in range(20)]

# use the first random sample as the observed data
y = np.array([func(random_sample[0])])
X = np.array([random_sample[0]])
y.shape = (1, 1)
X.shape = (1, 10)

# different sets of active dimensions
set_dim = [[np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])],
           [np.array([0, 1]), np.array([2, 3]), np.array([4, 5]), np.array([6, 7]), np.array([8, 9])],
           [np.array([0, 1, 2, 3, 4]), np.array([5, 6, 7, 8, 9])],
           [np.array([0, 1, 2, 3]), np.array([4, 5, 6]), np.array([7, 8, 9])]]

for i in range(len(set_dim)):
    # new kernel based on the active dims
    k = GPy.kern.Add([GPy.kern.RBF(input_dim=len(set_dim[i][x]), active_dims=set_dim[i][x])
                      for x in range(len(set_dim[i]))])
    # extend the data set with the next random sample
    y = np.concatenate((y, np.array([[func(random_sample[i + 1])]])))
    X = np.concatenate((X, np.array([random_sample[i + 1]])))
    model = GPy.models.GPRegression(X, y, k)
    model.optimize()
(Screenshots omitted: the conda list output for gpy, scipy, and numpy, and the installation paths of those packages.)
Possible Channel-Mixing Issue
Sometimes package builds from different channels (e.g., anaconda versus conda-forge) are incompatible. The times I've encountered this, it happened when compiled symbols were referenced across packages and the channels' different build stacks used different symbol names, leading to missing symbols when mixing.
I can report that using the exact same package versions as the OP, but prioritizing the conda-forge channel builds, gives me reliable behavior. While not conclusive, this is consistent with the issue coming from mixing the conda-forge build of GPy with otherwise Anaconda builds of its dependencies (e.g., numpy, scipy). Particularly suggestive is the fact that I have the exact same GPy build, and that module is where the error originates. At the same time, nothing in the error itself immediately identifies it as a channel-mixing issue.
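One way to see which channel each installed package actually came from is the channel column of conda list, which accepts a regex to filter package names, for example:

conda list 'gpy|numpy|scipy'

Packages built on conda-forge show conda-forge in the last column; depending on your Conda version, defaults-channel packages may show a blank channel field instead.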
Workaround
In practice, I avoid channel-mixing issues by always using YAML definitions to create my environments. This is a helpful practice because it encourages you to state the channel priority explicitly as part of the definition, and it makes Conda aware of your preference from the outset. The following environment definition works for me:
gpy_cf.yaml
name: gpy_cf
channels:
  - conda-forge
  - defaults
dependencies:
  - python=3.6
  - gpy=1.9.6
  - numpy=1.16.2
  - scipy=1.2.1
and then creating and activating the environment with:
conda env create -f gpy_cf.yaml
conda activate gpy_cf
Unless you really do need these exact versions, I would remove whatever version constraints are unnecessary (at the very least, drop the patch versions).
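For example, a looser variant of the same definition (keeping only the Python pin; adjust to taste) might be:

name: gpy_cf
channels:
  - conda-forge
  - defaults
dependencies:
  - python=3.6
  - gpy
  - numpy
  - scipy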
Broken Version
For the record, this is the version that I can replicate the error with:
gpy_mixed.yaml
name: gpy_mixed
channels:
  - defaults
  - conda-forge
dependencies:
  - python=3.6
  - conda-forge::gpy=1.9.6
  - numpy=1.16.2
  - scipy=1.2.1
In this case, we force gpy to come from conda-forge and let everything else source from the Anaconda (defaults) channel, similar to the configuration found in the OP.
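Note that newer versions of Conda (4.6+) also support strict channel priority, which avoids this kind of cross-channel mixing in the first place:

conda config --set channel_priority strict

With strict priority, a package is only sourced from a lower-priority channel if it does not exist in a higher-priority one.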
I need to get a particular process's CPU usage, as shown in Task Manager, using Python. How can I achieve this?
You can use the psutil library.
For getting the process:
import psutil
p = psutil.Process(<ProcessID>)
print(p.cpu_percent(interval=1.0))
This will give you a float representing that process's CPU utilization as a percentage, measured over the given interval.
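Note that on a multi-core machine Process.cpu_percent() can exceed 100% (it is summed across cores), while Task Manager shows usage normalized by the number of logical cores. A sketch of how to match Task Manager's number (the PID 1234 is a stand-in):

import psutil

p = psutil.Process(1234)  # hypothetical PID; substitute your own
# Dividing by the logical core count matches Task Manager's normalized display.
usage = p.cpu_percent(interval=1.0) / psutil.cpu_count()
print('%.1f%%' % usage)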
Also, if you have trouble retrieving the PID of a running process, you can again use the psutil library:
import psutil

for proc in psutil.process_iter():
    if proc.name() == <Some Running Process Name>:
        try:
            pinfo = proc.as_dict(attrs=['pid'])
        except psutil.NoSuchProcess:
            pass
        else:
            print(pinfo)
It will give you the pid inside a dictionary, though.
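As a side note, process_iter() can prefetch attributes (psutil 5.3+), which avoids a separate as_dict() call per process and silently skips processes that disappear mid-iteration; 'notepad.exe' here is just a stand-in name:

import psutil

# Prefetched attributes are exposed on proc.info as a dict.
for proc in psutil.process_iter(['pid', 'name']):
    if proc.info['name'] == 'notepad.exe':
        print(proc.info)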
I am trying to solve an energy model with Benders decomposition. In the model we create a master model and several sub-models, and I want to solve the sub-models in parallel; I saw an example here. This is what I am using in the code:
from pyomo.opt.base import SolverFactory
from pyomo.opt.parallel import SolverManagerFactory
from pyomo.opt.parallel.manager import solve_all_instances

subs = []
for m in range(0, len(supportsteps) - 1):
    subs.append(urbs.create_model(data,
                                  range(supportsteps[m], supportsteps[m + 1] + 1),
                                  supportsteps, type=1))

solver_manager = SolverManagerFactory("pyro")
solve_all_instances(solver_manager, 'gurobi', subs)
Which gives an error:
(screenshot of the error message omitted)
So what am I doing wrong? Or is it just not possible to solve them in parallel?
The error message that you're seeing means that SolverManagerFactory("pyro") gave you None. It's possible that pyro isn't installed or on your PATH.
Try installing the Pyomo extras: conda install -c conda-forge pyomo.extras or pyomo install-extras
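As a quick sanity check before calling solve_all_instances, you can guard against the factory returning None (a defensive sketch, not part of the original example; the error message text is hypothetical):

from pyomo.opt.parallel import SolverManagerFactory

solver_manager = SolverManagerFactory("pyro")
# The factory returns None when the named manager is unavailable, which
# later surfaces as an AttributeError on NoneType inside solve_all_instances.
if solver_manager is None:
    raise RuntimeError("Pyro solver manager unavailable; is Pyro installed?")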