I installed Python(x,y)-2.7.5.0 to run Python programs on my Windows 8 laptop. The programs run fine on Linux, but when I use Python(x,y) I get this warning:
D:\Python27\lib\site-packages\scipy\optimize\minpack.py:402: RuntimeWarning: Number of calls to function has reached maxfev = 2800.
warnings.warn(errors[info][0], RuntimeWarning)
The warning occurs during a harmonic analysis, at the "func = lambda ..." line:
y = N.ravel(zon[:,z,k,:])
print k
func = lambda p,s,c,y: fitfunc(p,s,c) - y # Distance to the target function
print k
p1, success = optimize.leastsq(func, p0[:], args=(s,c,y))
I looked up where maxfev is defined, but I guess it's not a good idea to change it there. My question is whether the warning is caused by a bug in Python(x,y) or by my Windows 8 system. How can I track down the answer?
Does anybody else use Python(x,y)-2.7.5.0 on a Windows 8 computer?
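For reference, maxfev does not have to be changed where it is defined in minpack.py; scipy.optimize.leastsq accepts it as a keyword argument, so it can be raised per call. A minimal sketch, reusing fitfunc, p0, s, c and y from the snippet above (they are assumed to be defined as in that code):
# Sketch only: raise the function-evaluation limit for this one call
# instead of editing scipy's minpack.py.
from scipy import optimize
func = lambda p, s, c, y: fitfunc(p, s, c) - y  # distance to the target function
p1, success = optimize.leastsq(func, p0[:], args=(s, c, y), maxfev=10000)
Raising the limit only silences the warning, though; hitting it can also be a sign that the initial guess p0 is poor for the data being fitted.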
As suggested by Padraic Cunningham (a 64-bit Python package for Windows), I installed Anaconda 64-bit for Windows. There are no problems anymore. So I think one problem might have been the 32-bit version of Python(x,y). Another factor might be that I didn't use the latest version of Python(x,y), but I'm not sure about that.
#id first_training
#caption Results from the first training
# CLICK ME
from fastai.vision.all import *
path = untar_data(URLs.PETS)/'images'
def is_cat(x): return x[0].isupper()
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)
Due to IPython and Windows limitation, python multiprocessing isn't available now.
So number_workers is changed to 0 to avoid getting stuck
Hi, I am studying with the fastai book and I ran this code without Colab or Paperspace.
Contrary to what I expected, it is taking a very long time (my computer is a workstation).
I am wondering: if I clear that warning, perhaps by increasing 'number_workers', would it be much faster than before?
How can I solve this problem?
Thanks
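For what it's worth, a minimal sketch of the usual workaround (an assumption on my part, not something tested on your machine): the loader argument is actually num_workers, and on Windows the worker processes only start when the code runs as a plain .py script with a __main__ guard, not inside Jupyter/IPython, which is why fastai forces it to 0 there.
# Sketch: run as a plain .py script on Windows so multiprocessing workers can start.
from fastai.vision.all import *

def is_cat(x): return x[0].isupper()

if __name__ == '__main__':
    path = untar_data(URLs.PETS)/'images'
    dls = ImageDataLoaders.from_name_func(
        path, get_image_files(path), valid_pct=0.2, seed=42,
        label_func=is_cat, item_tfms=Resize(224),
        num_workers=4)  # experiment with small values; 0 disables worker processes
    learn = cnn_learner(dls, resnet34, metrics=error_rate)
    learn.fine_tune(1)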
Windows 7, 64-bit
GNU Fortran (GCC) 4.7.0 20111220 (experimental) --> the MinGW version installed with Anaconda3/Miniconda3 64-bit.
Hi all,
I'm trying to compile some Fortran code to be used from Python via F2Py. The full project is Solcore, in case anyone is interested. Everything works fine on Linux and macOS; the problem comes with Windows. After some effort I've narrowed the problem down to the quadruple precision variables in my Fortran code, which are not being handled properly.
A minimum example that works perfectly well in Linux/MacOS but not in Windows is:
program foo
    real*16 q, q2
    q = 20
    q2 = q + q
    print*, q, q2
end program foo
In Linux/MacOS this prints, as expected:
20.0000000000000000000000000000000000 40.0000000000000000000000000000000000
However, in Windows I get:
2.00000000000000000000000000000000000E+0001 1.68105157155604675313133890866087630E-4932
Leaving aside the scientific notation, this is clearly not what I expected. The same result appears any time I do an operation with quadruple precision variables, and I cannot figure out why.
This is not the same error that has already been reported for quadruple precision variables in Fortran with the MinGW version included in Anaconda.
Any suggestion will be more than welcome. Please keep in mind that I ultimately need to make this work with F2Py, and the MinGW included in Anaconda is the only toolchain I have found that works, after reading many instructions and tutorials, so I would prefer to stick with it if possible.
Many thanks,
Diego
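As an aside (my assumption, not part of the original post): since the end goal is F2Py, it can be worth checking what the NumPy built with that same MinGW toolchain actually uses for extended precision, because real*16 values can only survive the Python boundary if longdouble is wide enough on that platform.
# Hypothetical diagnostic: report what NumPy's longdouble is on this toolchain.
import numpy as np
print("longdouble itemsize:", np.dtype(np.longdouble).itemsize, "bytes")
print(np.finfo(np.longdouble))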
I am running Keras neural network training and prediction on a GTX 1070 on Windows 10. Most of the time it works, but from time to time it complains:
E c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\cuda\cuda_dnn.cc:359] could not create cudnn handle: CUDNN_STATUS_NOT_INITIALIZED
E c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\cuda\cuda_dnn.cc:366] error retrieving driver version: Unimplemented: kernel reported driver version not implemented on Windows
E c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\cuda\cuda_dnn.cc:326] could not destroy cudnn handle: CUDNN_STATUS_BAD_PARAM
F c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\kernels\conv_ops.cc:659] Check failed: stream->parent()->GetConvolveAlgorithms(&algorithms)
It cannot be explained by the literal meaning of the error, nor does it look like an OOM error.
How can I fix it?
Try limiting your GPU memory usage by setting the per_process_gpu_memory_fraction GPU option.
Fiddle around with it to see what works and what doesn't.
I recommend 0.7 as a starting baseline.
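A minimal sketch of that setting with the TensorFlow 1.x / standalone Keras API used elsewhere in this thread (0.7 is just the suggested starting point):
# Sketch: cap TensorFlow at roughly 70% of the GPU's memory
# (TensorFlow 1.x / keras API, as in the other answers here).
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session

config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.7
set_session(tf.Session(config=config))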
I ran into this problem sometimes on Windows 10 with Keras.
Rebooting solved the problem for a short time, but it happened again.
Following https://github.com/fchollet/keras/issues/1538:
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.3
set_session(tf.Session(config=config))
These settings solved the hang for me.
I found the solution to this problem.
I had the same issue on Windows 10 with an Nvidia GeForce 920M.
Check that you have the correct version of the cuDNN library. If the version is not compatible with your CUDA version, it won't throw an error during the TensorFlow installation, but it will interfere during memory allocation on the GPU.
Do check your CUDA and cuDNN versions, and also follow the instructions about session creation mentioned above.
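As an aside (this helper only exists in newer TensorFlow releases, roughly 2.3 and later, so treat it as an assumption for older setups): you can print the CUDA and cuDNN versions your TensorFlow build expects and compare them with what is actually installed.
# Aside, assumes TensorFlow 2.3+: print the CUDA/cuDNN versions this
# TensorFlow wheel was built against, to compare with the installed ones.
import tensorflow as tf
print(tf.version.VERSION)
print(tf.sysconfig.get_build_info())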
Finally the issue is resolved for me; I spent many hours struggling with this.
I recommend following all the installation steps properly, as described in these links:
TensorFlow: https://www.tensorflow.org/install/install_windows
cuDNN: https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html#install-windows
For me this wasn't enough. I also updated my GeForce Game Ready Driver from the GeForce Experience window, and after a restart it started working for me.
The driver can also be downloaded from https://www.geforce.com/drivers.
Similar to what other people are saying, enabling memory growth for your GPUs can resolve this issue.
The following works for me when added to the beginning of the training script:
# Using Tensorflow-2.4.x
import tensorflow as tf
try:
    tf_gpus = tf.config.list_physical_devices('GPU')
    for gpu in tf_gpus:
        tf.config.experimental.set_memory_growth(gpu, True)
except:
    pass
The TensorFlow documentation section "Allowing GPU memory growth" helped me a lot:
The first is the allow_growth option, which attempts to allocate only as much GPU memory based on runtime allocations: it starts out allocating very little memory, and as Sessions get run and more GPU memory is needed, we extend the GPU memory region needed by the TensorFlow process. Note that we do not release memory, since that can lead to even worse memory fragmentation. To turn this option on, set the option in the ConfigProto by:
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config, ...)
or
with tf.Session(graph=graph_node, config=config) as sess:
...
The second method is the per_process_gpu_memory_fraction option, which determines the fraction of the overall amount of memory that each visible GPU should be allocated. For example, you can tell TensorFlow to only allocate 40% of the total memory of each GPU by:
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.4
session = tf.Session(config=config, ...)
I'm running Clojure 1.2 on both my Snow Leopard OS X machine and my Ubuntu Linux box via the lein repl command. I am going through the Enlive tutorial: https://github.com/swannodette/enlive-tutorial/
When I get to the "Third Scrape" tutorial and run this command:
(print-stories)
it works as expected on Ubuntu, but on OS X, it outputs only the first story and then outputs the rest only after I enter some expression, whether it is a number, a (println "hello world"), or whatnot. Something seems to be weird about the way the REPL is working on OS X -- as if the buffer is not flushing its output completely.
I notice that a doseq macro is used in the print-stories function. So if I do this:
tutorial.scrape3=> (doseq [x (map extract (stories))] (println x))
I get this output on OSX:
{:summary , :byline , :headline With Stones and Firebombs, Mubarak Allies Attack}
which is only the first item. If I then enter 0 (or any valid expression) and press return, I get the rest of the output:
0
{:summary The Conversation: Long, worthy road to democracy. , :byline , :headline }
{:summary The Frugal Traveler scores a cheap ticket to Malaga, Spain, birthplace of Picasso., :byline , :headline A Taste of Picasso (and Iberian Cuisine)}
{:summary Lay claim to the next great place: four emerging destinations., :byline , :headline Beat the Crowds}
[etc]
I also notice that this behavior is not consistent. Sometimes nothing is output at all, and then I can flush it out by typing 0 or something and pressing enter. Sometimes it flushes all the output properly.
Does anyone have any ideas?
As it happens, I did the same Enlive tutorial on Snow Leopard last night, and the scrape3 (print-stories) function works fine for me. The doseq code in your question also works for me without stopping.
What output do you get if you run "lein version" at the command line? My version details are:
Leiningen 1.4.2 on Java 1.6.0_22 Java HotSpot(TM) 64-Bit Server VM
Cheers,
Colin
It's not an OS X issue; it happened to me on Ubuntu 10.10 as well. It might be related to rlwrap, which is used by Leiningen AFAIK. I use cake nowadays.
I'm working with an older version of OpenSSL, and I'm running into some behavior that has stumped me for days when trying to work with cross-platform code.
I have code that calls OpenSSL to sign something. My code is modeled after the code in ASN1_sign, which is found in a_sign.c in OpenSSL, which exhibits the same issues when I use it. Here is the relevant line of code (which is found and used exactly the same way in a_sign.c):
EVP_SignUpdate(&ctx,(unsigned char *)buf_in,inl);
ctx is a structure that OpenSSL uses, not relevant to this discussion
buf_in is a char* of the data that is to be signed
inl is the length of buf_in
EVP_SignUpdate can be called repeatedly in order to read in data to be signed before EVP_SignFinal is called to sign it.
Everything works fine when this code is used on Ubuntu and Windows 7, both of them produce the exact same signatures given the same inputs.
On OS X, if the size of inl is less than 64 (that is, there are 64 bytes or fewer in buf_in), it too produces the same signatures as Ubuntu and Windows. However, if the size of inl becomes greater than 64, it produces its own internally consistent signatures that differ from the other platforms. By internally consistent, I mean that the Mac will read its signatures and verify them as proper, while it will reject the signatures from Ubuntu and Windows, and vice versa.
I managed to fix this issue and produce the same signatures by changing the line above to the following, which feeds the buffer one byte at a time:
int i;
/* feed the data to EVP_SignUpdate one byte at a time */
for (i = 0; i < inl; i++) {
    EVP_SignUpdate(&ctx, (unsigned char *)buf_in + i, 1);
}
This causes OS X to reject its own signatures of data > 64 bytes as invalid, and I tracked down a similar line elsewhere for verifying signatures that needed to be broken up in an identical manner.
This fixes the signature creation and verification, but something is still going wrong, as I'm encountering other problems, and I really don't want to go traipsing (and modifying!) much deeper into OpenSSL.
Surely I'm doing something wrong, as I'm seeing the exact same issues when I use stock ASN1_sign. Is this an issue with the way that I compiled OpenSSL? For the life of me I can't figure it out. Can anyone educate me on what bone-headed mistake I must be making?
This is likely a bug in the MacOS implementation. I recommend you file a bug by sending the above text to the developers as described at http://www.openssl.org/support/faq.html#BUILD17
There are known issues with OpenSSL on the mac (you have to jump through a few hoops to ensure it links with the correct library instead of the system library). Did you compile it yourself? The PROBLEMS file in the distribution explains the details of the issue and suggests a few workarounds. (Or if you are running with shared libraries, double check that your DYLD_LIBRARY_PATH is correctly set). No guarantee, but this looks a likely place to start...
The most common issue when porting code between Windows and Linux is the default value of uninitialized memory. I think Windows sets it to 0xDEADBEEF and Linux sets it to 0s.