CPU environment: Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz
First, I installed TensorFlow with pip install tensorflow==1.12.0 and downloaded tensorflow-benchmarks.
Run 1: export MKL_VERBOSE=0; export MKL_ENABLE_INSTRUCTIONS=AVX512; python tf_cnn_benchmarks.py --device=cpu --data_format=NHWC --model=alexnet --batch_size=8
Run 2: export MKL_VERBOSE=0; export MKL_ENABLE_INSTRUCTIONS=AVX2; python tf_cnn_benchmarks.py --device=cpu --data_format=NHWC --model=alexnet --batch_size=8
The speed is almost the same! I also tried different models and batch sizes.
Second, I also tested Caffe compiled with MKL. I found that MKL_ENABLE_INSTRUCTIONS=AVX512 does not perform much better than MKL_ENABLE_INSTRUCTIONS=AVX2.
Why?
I assume your intention is to test TensorFlow accelerated with MKL-DNN. Unlike the traditional MKL library, MKL-DNN provides math accelerations only for deep learning operations. The terms MKL and MKL-DNN are apparently used interchangeably for Intel-optimized TensorFlow, even though the acceleration actually comes from Intel MKL-DNN. So, to answer your question: the MKL-DNN library does not yet support controlling ISA dispatching, which is why MKL_ENABLE_INSTRUCTIONS makes no difference.
By the way, pip install tensorflow installs Google's official TensorFlow build, which does not come with MKL accelerations. To get Intel-optimized TensorFlow, please refer to the install guide: https://software.intel.com/en-us/articles/intel-optimization-for-tensorflow-installation-guide. To check whether MKL-DNN is enabled in your build, use export MKLDNN_VERBOSE=1 instead of MKL_VERBOSE=1.
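As a quick check from Python, a commonly cited one-liner for TF 1.x builds (hedged: pywrap_tensorflow.IsMklEnabled() is an internal helper that existed in the 1.x series; later versions moved it):

import tensorflow as tf

# Returns True only if this TensorFlow build was compiled with MKL/MKL-DNN.
# (IsMklEnabled() is a TF 1.x internal; newer releases relocated it.)
print(tf.pywrap_tensorflow.IsMklEnabled())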
Conda wants to downgrade my blas, lapack, etc. packages from an MKL to an OpenBLAS version. I understand that conda juggling MKL versus OpenBLAS is not an uncommon issue, yet I have not found a solution that does the job for me. I have these packages installed:
blas 2.113 mkl conda-forge
blas-devel 3.9.0 13_linux64_mkl conda-forge
libblas 3.9.0 13_linux64_mkl conda-forge
libcblas 3.9.0 13_linux64_mkl conda-forge
liblapack 3.9.0 13_linux64_mkl conda-forge
liblapacke 3.9.0 13_linux64_mkl conda-forge
mkl 2022.0.1 h06a4308_117
mkl-devel 2022.0.1 h66538d2_117
mkl-include 2022.0.1 h06a4308_117
mkl-service 2.4.0 py39h404a4ab_0 conda-forge
mkl_fft 1.3.1 py39h6964271_2 conda-forge
mkl_random 1.2.2 py39h8b66066_1 conda-forge
and I have a .condarc (on Linux) containing
channels:
- conda-forge
- defaults
dependencies:
- python>=3.6
- numpy>=1.13
- scipy>=0.18
- cython>=0.29
- mkl
- mkl-devel
- libblas=*=*mkl
- bottleneck
- pip
- setuptools>=30.3.0
- h5py
- pyyaml
- pytest
ssl_verify: true
auto_activate_base: false
Moreover, in the conda-meta directory I have a pinned file containing the line libblas=*=*mkl. Yet, upon conda update --all, this is suggested:
The following packages will be DOWNGRADED:
... other pkgs ...
libblas 3.9.0-13_linux64_mkl --> 3.9.0-13_linux64_openblas
libcblas 3.9.0-13_linux64_mkl --> 3.9.0-13_linux64_openblas
liblapack 3.9.0-13_linux64_mkl --> 3.9.0-13_linux64_openblas
Why, despite the .condarc and pinned files, am I getting this switch from MKL to OpenBLAS, and what else can I do to prevent it?
I'm posting a quirky workaround for my own OP. This finally worked for me.
First, I could not convince conda to respect the channel ordering in .condarc, nor the content of the pinned files, before running the update, by any other suggestion, including those found outside of Stack Overflow.
Second, I stored the output of conda list | grep mkl, conda list | grep intel, and conda list | grep open away for later reference. Then I "gave in" and let the "upgrade" happen by running conda update --all. Needless to say, afterwards my environment indeed showed the unwanted replacement of all MKL-type libraries with OpenBLAS ones.
Third, within the now OpenBLAS-infested environment, I "re-installed" MKL:
conda install blas=*=*mkl
conda install libblas=*=*mkl
conda update numpy
conda update scipy
conda install intel-openmp # the "update" had also removed this ...
Also make sure that no OpenBLAS packages remain, by running conda remove on any related leftovers. (I'm not claiming that all of the above commands are really necessary to restore the environment's original MKL state, but that's what I did.)
Fourth, comparing against the reference notes from the second step, I checked that at this point my environment claimed to be back to "all-MKL". Moreover, using this extremely helpful site http://markus-beuckelmann.de/blog/boosting-numpy-blas.html I also checked that this was indeed true with respect to the typical MKL timings to be expected.
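For reference, a minimal timing sketch in the spirit of that blog post (the matrix size is arbitrary; an MKL-backed numpy should finish a large matmul noticeably faster than an OpenBLAS or reference build, though exact numbers depend on the machine):

import time
import numpy as np

a = np.random.rand(2000, 2000)
b = np.random.rand(2000, 2000)

t0 = time.time()
np.dot(a, b)  # dominated by the BLAS dgemm call
print("2000x2000 matmul took %.3f s" % (time.time() - t0))

np.show_config()  # for an MKL-backed build this should mention mkl_rt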
As an aside, there is a really weird and confusing issue, which may not be related to the OP but which I stumbled across in this context: on the web one finds many, many quotes stating that, for numpy or scipy to actually "use" MKL, one has to get this kind of output
In []: numpy.show_config()
blas_mkl_info:
libraries = ['mkl_rt', 'pthread']
library_dirs = ['/home/abcd/efgh..../lib']
from numpy.show_config() or scipy.show_config(). This seems not to be true in general. In fact, when one gets
In []: numpy.show_config()
blas_info:
libraries = ['cblas', 'blas', 'cblas', 'blas']
library_dirs = ['/home/abcd/efgh..../lib']
this is no cause for panic as long as in /home/abcd/efgh..../lib one finds everything linked as
liblapacke.so -> libmkl_rt.so.2*
libblas.so -> libmkl_rt.so.2*
...and so on,
which I do.
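A quick way to automate that symlink check, as a sketch (the environment path and library names are placeholders; substitute whatever your lib directory actually contains):

import os

# Hypothetical env lib dir -- replace with your actual environment's lib path.
libdir = os.path.expanduser("~/miniconda3/envs/myenv/lib")

for name in ("libblas.so", "libcblas.so", "liblapack.so", "liblapacke.so"):
    path = os.path.join(libdir, name)
    if os.path.islink(path):
        # realpath resolves the whole symlink chain; it should end at libmkl_rt
        print(name, "->", os.path.realpath(path))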
(conda is just soo painful. sigh)
Package management can be a mess whenever the Intel MKL package has been updated but the numpy package has not: I may end up with Intel MKL installed, but numpy using OpenBLAS. To get this working in my setup, I often do this when creating a new environment:
conda create -y --name test python=3 numpy mkl cmake blas=*=*mkl
so that MKL is installed at the same time as numpy, and numpy will use the MKL. This often results in an older version of MKL, since numpy was linked against an older MKL release.
Without the blas=*=*mkl, I often end up with an OpenBLAS-based numpy installed, with the newest version of the Intel MKL not being used by anything:
conda create -n test33 numpy mkl blas=*=*mkl
...
The following NEW packages will be INSTALLED:
  blas    pkgs/main/osx-64::blas-1.0-mkl
  ...
  numpy   pkgs/main/osx-64::numpy-1.21.5-py39h2e5f0a9_1
  ...
  mkl     pkgs/main/osx-64::mkl-2021.4.0-hecd8cb5_637
  ...
# results in:
blas_opt_info:
    libraries = ['mkl_rt', 'pthread']

conda create -n test33 numpy mkl
...
The following NEW packages will be INSTALLED:
  blas    pkgs/main/osx-64::blas-1.0-openblas
  ...
  numpy   pkgs/main/osx-64::numpy-1.21.5-py39h9c3cb84_1
  ...
  mkl     pkgs/main/osx-64::mkl-2022.0.0-hecd8cb5_105
  ...
# results in:
blas_opt_info:
    libraries = ['openblas', 'openblas']
Unfortunately, numpy does not appear to encode which BLAS library it is linked against in its version string. Oftentimes you will end up with both OpenBLAS and MKL in your environment.
See the note section at the end of https://conda-forge.org/docs/maintainer/knowledge_base.html#switching-blas-implementation
If you want to commit to a specific BLAS implementation, you can prevent conda from switching back by pinning the BLAS implementation in your environment. To commit to MKL, add blas=*=mkl to <conda-root>/envs/<env-name>/conda-meta/pinned, as described in the conda docs.
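For instance, a minimal sketch that appends the pin from Python (the conda root and environment name here are hypothetical placeholders; adjust to your setup):

from pathlib import Path

# Hypothetical paths -- substitute your actual conda root and env name.
pinned = Path.home() / "miniconda3" / "envs" / "myenv" / "conda-meta" / "pinned"
with pinned.open("a") as f:  # append, so existing pins are preserved
    f.write("blas=*=mkl\n")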
When I use pydicom in Python 3.6, there is a problem:
import pydicom
import matplotlib.pyplot as plt
import os
import pylab
filePath = "/Users/zhuangrui/Documents/Python/Dicom/dicoms/zhang_bo/0001.dcm"
dataSet_1 = pydicom.dcmread(filePath)
plt.imshow(dataSet_1.pixel_array)
plt.show()
Here is the problem (the traceback was posted as an image and is not reproduced here). How can it be solved? Thank you very much!
I faced the same problem. After doing some research via the link suggested above, I managed to solve it by updating to the latest pydicom module ("1.2.0") and installing gdcm. You can update pydicom with
pip install -U git+https://github.com/pydicom/pydicom.git
You can find the latest gdcm here and this link explains the installation.
I use Anaconda, and with it installing the gdcm package and solving the problem is easier. If you use Anaconda, just type, from inside your environment:
conda install pydicom --channel conda-forge
to get the latest pydicom, and
conda install -c conda-forge gdcm
to get gdcm. This resolves the problem. Hope this helps.
With pydicom, you need an appropriate image handler also installed to handle compressed image types.
For JPEG Lossless, in theory the following should work: jpeg_ls, gdcm, or Pillow with the JPEG plugin. All of these also require NumPy to be installed. See the discussion at https://github.com/pydicom/pydicom/issues/532.
There is also a pull request in progress to add more descriptive error messages about which image handlers are needed for different images.
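To see which handlers your pydicom install can actually use, here is a minimal sketch (assuming pydicom 1.x or newer, where the configured handler list lives in pydicom.config.pixel_data_handlers):

import pydicom.config

# Each handler module reports whether its dependencies are importable.
for handler in pydicom.config.pixel_data_handlers:
    print(handler.__name__, "available:", handler.is_available())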
Problem:
I was trying to read medical images with the .dcm extension, but was getting an error on Windows as well as on Ubuntu. I found a solution that works on both machines.
The error I got on Ubuntu is: NotImplementedError: this transfer syntax JPEG 2000 Image Compression (Lossless Only), can not be read because Pillow lacks the jpeg 2000 decoder plugin
(Note: on Windows I was getting a different error, but I am sure it is due to the same issue, i.e. Pillow does not support the JPEG 2000 format.)
Platform Information:
I am using: Python 3.6, Anaconda and Ubuntu, 15 GB RAM
RAM is important:
The solution I applied is the same as Ali explained above, but I want to add that the installation may take time, depending on the RAM you have. On Ubuntu, where I am using 15 GB RAM on a cloud platform, it took less time; on Windows, on a local machine with 4 GB RAM, it took a lot of time.
Solution
Anaconda is necessary. Why?
Please check the official pydicom documentation (https://pydicom.github.io/pydicom/dev/getting_started.html), where it is mentioned: "To install pydicom along with image handlers for compressed pixel data, we encourage you to use Miniconda or Anaconda".
If you are using Ubuntu, directly open a terminal. If you are using Windows, go to Environments in Anaconda Navigator and start a terminal from there. Execute the following commands in it:
pip install -U git+https://github.com/pydicom/pydicom.git
conda install pydicom --channel conda-forge
conda install -c conda-forge gdcm
Cross-check:
Now use the .dcm file for which we got the error. Try the following code in a Python notebook:
import pydicom
import matplotlib.pyplot as plt

filename = 'FileName.dcm'
ds = pydicom.dcmread(filename)
plt.imshow(ds.pixel_array, cmap=plt.cm.bone)
It should display the image. Also try this code:
ds.pixel_array
This will give you the array of pixel values.
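If pixel_array still fails, it helps to check which transfer syntax the file actually uses, so you know which handler (Pillow, gdcm, or jpeg_ls) is needed. A short sketch:

import pydicom

ds = pydicom.dcmread('FileName.dcm')
# The transfer syntax UID says how the pixel data is compressed.
print(ds.file_meta.TransferSyntaxUID)
print(ds.file_meta.TransferSyntaxUID.name)  # human-readable, e.g. "JPEG 2000 ..."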
I wanted to add this as a comment to this question - is multi-cpu supported by h2o-xgboost? - but apparently my rep is too low.
I am using the latest stable version of h2o (3.14.06).
In order to try to solve this problem, I've made sure that gcc is installed in my Docker image (using apt-get install gcc):
dpkg -l | grep gcc
gcc 4:5.3.1-1ubuntu1 amd64 GNU C compiler
gcc-5 5.4.0-6ubuntu1~16.04.5 amd64 GNU C compiler
**output truncated**
Unfortunately, when the cluster is spun up, it is still reporting:
INFO: Found XGBoost backend with library: xgboost4j
INFO: Your system supports only minimal version of XGBoost (no GPUs, no multithreading)!
Can anyone provide any insights? Clearly I'm missing a piece of the puzzle.
Right now H2O bundles only the GPU-enabled and the minimal (no GPU, no OMP) versions of XGBoost. However, there is an experimental change in the branch mm/xgb_upgrade which contains an OMP-enabled version of XGBoost (instead of the minimal version): https://github.com/h2oai/h2o-3/tree/mm/xgb_upgrade
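As a quick sanity check from the Python client, you can ask whether the cluster found a usable XGBoost backend at all (a sketch; which backend variant was picked is reported in the H2O server log, as in the INFO lines quoted above):

import h2o
from h2o.estimators.xgboost import H2OXGBoostEstimator

h2o.init()
# True if any XGBoost backend (GPU, OMP, or minimal) is available on the cluster.
print(H2OXGBoostEstimator.available())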
Building mm/xgb_upgrade works. Which JIRA ticket refers to this issue?
I want to use the new platform OpenShift 3, but I can't install lxml for Weblate with pip when the build process is launched.
In the logs, the last line is "Running setup.py install for lxml", with no further error output.
How can I find out what happened?
Thanks
Some of the data analytics packages, when compiled with compiler optimisations, can chew up too much memory and hit the default memory limit for builds. Try following the steps outlined in:
Pandas on OpenShift v3
Less likely, but just in case the version of pip being used is the culprit, add a file .s2i/environment and in it put:
UPGRADE_PIP_TO_LATEST=1
This will ensure that the latest version of pip is installed first. This can sometimes be required where a package provides a wheel file: an older version of pip may ignore the binary wheel or get confused in other ways.
Thanks @Graham, I followed the instructions in Pandas on OpenShift v3 to edit the YAML build configuration:
resources:
  limits:
    memory: 1Gi
I am using ptxdist to create kernel and rootfs images for an embedded Linux system running on an ARM Cortex-A8 CPU.
I was trying to use a newer compiler (GCC 5+) and so was forced to upgrade several external packages that would not compile under the new GCC.
I compiled the following versions of Upstart and its immediate dependencies:
upstart: 1.13.2
libnih: 1.0.3
dbus: 1.11.2
json-c: 0.12.1
When I boot, I get the following message:
init: com.ubuntu.Upstart.c:3525: Assertion failed in control_emit_event_emitted: env != NULL
init: Caught abort, core dumped
Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000600
Searching online did not yield useful hints - the only relevant issue I found is this one, but it concerns an older version of Upstart, and my libnih is already the correct version.
According to comment #8 in the bug report you linked, it is not enough to use version 1.0.3 of libnih -- you have to specifically use the Ubuntu version, as this seems to include dbus fixes which could solve the problem you are seeing. From the bug report:
David Ireland (e-david) wrote on 2013-04-22: #7
I've built libnih 1.0.3 from source and also made sure that upstart builds with that version of the nih-dbus-tool. I'm still having this problem.
James Hunt (jamesodhunt) wrote on 2013-04-22: #8
Which problem? The crash? If so, you are still using the wrong version of libnih: you should be using the Ubuntu version (specifically 1.0.3-4ubuntu16) from here: https://code.launchpad.net/~ubuntu-branches/ubuntu/raring/libnih/raring
You do not need the --session flag to run a "Session Init" (yes, this is a little confusing, but --session was added for testing a long time ago and is still required for that). A "Session Init" only requires "--user".