I recently installed Anaconda (v4.3.1) and used conda to install cling for a C++ kernel on Jupyter: conda install -c conda-forge cling=0.3.post
I am on Windows 8.1 and can't seem to find an answer on how to resolve this: I get a 'Dead Kernel' error upon opening a notebook with any C++11, C++14 or C++17 kernel. I have not been able to use the C++ kernels at all since installation; the Python 3 kernel works completely fine. Below is a screenshot of the error I get.
Dead Kernel: Error Message Screenshot
For future errors: look at the console window that opens when you start the notebook server; it will report Python exceptions.
For this case, I think you've hit what I recently discovered: cling's Jupyter kernel does not currently work on Windows. It uses the fcntl module to make the pipes it uses for input/output non-blocking, and fcntl is only available on *nix operating systems. You'll have to wait until they change this.
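To see the incompatibility in isolation, here is a minimal sketch of the non-blocking-pipe trick the kernel relies on (the variable names are mine, not the kernel's). On Windows the import alone fails with ModuleNotFoundError:

```python
import os
import fcntl  # POSIX-only: on Windows this import raises ModuleNotFoundError

# Make the read end of a pipe non-blocking, roughly what the cling
# kernel does for its I/O pipes.
r, w = os.pipe()
flags = fcntl.fcntl(r, fcntl.F_GETFL)
fcntl.fcntl(r, fcntl.F_SETFL, flags | os.O_NONBLOCK)

# With O_NONBLOCK set, reading from an empty pipe raises BlockingIOError
# instead of hanging.
try:
    os.read(r, 1)
except BlockingIOError:
    print("pipe is non-blocking")

os.close(r)
os.close(w)
```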
I am following the instructions at https://qiskit.org/documentation/getting_started.html to teach myself Qiskit (a quantum computing developer kit), which requires Anaconda 3. For this learning exercise I plan to use a RPi4 running Ubuntu 21.x. I installed the 64-bit (AWS Graviton2 / ARM64) installer (413 M). The installation hit a block at the prompt to initialize conda, where I get an error: line 477: 5128 Illegal instruction $PREFIX/bin/conda init, which is further described in an open issue on GitHub.
I would like to know if anyone has had success with Anaconda on a RPi4b and, even better, has been able to use Qiskit on any OS. [I see the mambaforge/miniforge options, but I am not sure Qiskit is compatible with the conda versions provided by mamba/miniforge.]
Thank you.
You can't use that Anaconda installer on a RPi4, as it was compiled for the AWS Graviton2 architecture.
Have you tried Miniforge? It should work fine on a RPi4.
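A rough sketch of what that could look like (not verified on a Pi; the installer URL follows Miniforge's release naming for aarch64, and the Python version pin is my own choice):

```shell
# Grab the aarch64 Miniforge installer and set up an env for Qiskit
wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-aarch64.sh
bash Miniforge3-Linux-aarch64.sh -b -p "$HOME/miniforge3"
source "$HOME/miniforge3/etc/profile.d/conda.sh"

conda create -y -n qiskit python=3.9
conda activate qiskit
pip install qiskit   # qiskit itself is installed with pip, not conda
```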
According to https://www.tensorflow.org/install/install_mac, GPU support for OS X is no longer provided:
Note: As of version 1.2, TensorFlow no longer provides GPU support on Mac OS X.
However, I want to run an eGPU setup, like an Akitio Node with a 1080 Ti via Thunderbolt 3.
What steps are required to get this setup to work?
So far I know that
disable SIP
run automate e-gpu script https://github.com/goalque/automate-eGPU
are required. What else is needed to get CUDA / tensorflow to work?
I wrote a little tutorial on compiling TensorFlow 1.2 with GPU support on macOS. I think it's customary to copy relevant parts to SO, so here it goes:
If you haven’t used a TensorFlow-GPU set-up before, I suggest first setting everything up with TensorFlow 1.0 or 1.1, where you can still do pip install tensorflow-gpu. Once you get that working, the CUDA set-up would also work if you’re compiling TensorFlow. If you have an external GPU, YellowPillow's answer (or mine) might help you get things set up.
Follow the official tutorial “Installing TensorFlow from Sources”, but obviously substitute git checkout r1.0 with git checkout r1.2.
When doing ./configure, pay attention to the Python library path: it sometimes suggests an incorrect one. I chose the default options in most cases, except for the Python library path, CUDA support and compute capability. Don't use Clang as the CUDA compiler: this will lead to the error "Inconsistent crosstool configuration; no toolchain corresponding to 'local_darwin' found for cpu 'darwin'." Using /usr/bin/gcc as your compiler will actually use the Clang that comes with macOS / Xcode. Below is my full configuration.
TensorFlow 1.2 expects the OpenMP library, which is not available in the current Apple Clang. It should speed up multithreaded TensorFlow on multi-CPU machines, but the code will also compile without it. We could try to build TensorFlow with gcc 4 (which I didn't manage), or simply remove the line that includes OpenMP from the build file. In my case I commented out line 98 of tensorflow/third_party/gpus/cuda/BUILD.tpl, which contained linkopts = ["-lgomp"] (though the location of the line may obviously change). Some people had issues with zmuldefs, but I assume that was with earlier versions; thanks to udnaan for pointing out that it's OK to comment out these lines.
I had some problems building with the latest bazel 0.5.3, so I reverted to using 0.4.5 that I already had installed. But some discussion in a github issue mentioned bazel 0.5.2 also didn’t have the problem.
Now build with bazel and finish the installation as instructed by the official install guide. On my 3.2 GHz iMac this took about 37 minutes.
Using python library path: /Users/m/code/3rd/conda/envs/p3gpu/lib/python3.6/site-packages
Do you wish to build TensorFlow with MKL support? [y/N] N
No MKL support will be enabled for TensorFlow
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:
Do you wish to build TensorFlow with Google Cloud Platform support? [y/N]
No Google Cloud Platform support will be enabled for TensorFlow
Do you wish to build TensorFlow with Hadoop File System support? [y/N]
No Hadoop File System support will be enabled for TensorFlow
Do you wish to build TensorFlow with the XLA just-in-time compiler (experimental)? [y/N]
No XLA support will be enabled for TensorFlow
Do you wish to build TensorFlow with VERBS support? [y/N]
No VERBS support will be enabled for TensorFlow
Do you wish to build TensorFlow with OpenCL support? [y/N]
No OpenCL support will be enabled for TensorFlow
Do you wish to build TensorFlow with CUDA support? [y/N] y
CUDA support will be enabled for TensorFlow
Do you want to use clang as CUDA compiler? [y/N]
nvcc will be used as CUDA compiler
Please specify the CUDA SDK version you want to use, e.g. 7.0. [Leave empty to use system default]:
Please specify the location where CUDA toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:
Please specify the cuDNN version you want to use. [Leave empty to use system default]:
Please specify the location where cuDNN library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size.
[Default is: "3.5,5.2"]: 6.1
INFO: Starting clean (this may take a while). Consider using --async if the clean takes more than several minutes.
Configuration finished
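For reference, the build-and-install step from the official source-install guide looks roughly like this (a sketch; the wheel filename will differ depending on your exact version and platform):

```shell
# Build the pip package with CUDA support enabled...
bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package

# ...generate the wheel into /tmp/tensorflow_pkg...
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg

# ...and install it into the active environment.
pip install /tmp/tensorflow_pkg/tensorflow-*.whl
```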
Assuming that you have already set up your eGPU box and attached the TB3 cable from the eGPU to your TB3 port:
1. Download the automate-eGPU script and run it
curl -o ~/Desktop/automate-eGPU.sh https://raw.githubusercontent.com/goalque/automate-eGPU/master/automate-eGPU.sh && chmod +x ~/Desktop/automate-eGPU.sh && cd ~/Desktop && sudo ./automate-eGPU.sh
You might get an error saying:
"Boot into recovery partition and type: csrutil disable"
All you need to do now is to restart your computer and when it's restarting hold down cmd + R to enable the recovery mode. Then locate the Terminal while in recovery mode and type in:
csrutil disable
Then restart your computer and re-run the automate-eGPU.sh script
2. Downloading and installing CUDA
CUDA: https://developer.nvidia.com/cuda-downloads
Run the cuda_8.0.61_mac.dmg file and follow the installation steps. Afterwards you will need to set the paths.
Go to your Terminal and type:
vim ~/.bash_profile
(or wherever you store your environment variables) and add these three lines:
export CUDA_HOME=/usr/local/cuda
export DYLD_LIBRARY_PATH="$CUDA_HOME/lib:$CUDA_HOME:$CUDA_HOME/extras/CUPTI/lib"
export LD_LIBRARY_PATH=$DYLD_LIBRARY_PATH
3. Downloading and installing cuDNN
cuDNN: https://developer.nvidia.com/cudnn
Downloading cuDNN is a bit more troublesome: you have to sign up as an Nvidia developer before you can download it. Make sure to download the cuDNN v5.1 Library for OSX, as it's the one Tensorflow v1.1 expects. Note that we can't use Tensorflow v1.2, as there is no GPU support for Macs :((
You will download a file called cudnn-8.0-osx-x64-v5.1.tgz. Unzipping it creates a folder called cuda; cd into it using Terminal, assuming the folder is in Downloads.
Open terminal and type:
cd ~/Downloads/cuda
Now we need to copy cuDNN files to where CUDA is stored so:
sudo cp include/* /usr/local/cuda/include/
sudo cp lib/* /usr/local/cuda/lib/
4. Now install Tensorflow-GPU v1.1 in your conda/virtualenv
For me, since I use conda, I created a new environment using Terminal:
conda create -n egpu python=3
source activate egpu
pip install tensorflow-gpu # should install version 1.1
5. Verify that it works
First you have to restart your computer. Then, in the terminal, type python and enter:
import tensorflow as tf
with tf.device('/gpu:0'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
    c = tf.matmul(a, b)
with tf.Session() as sess:
    print(sess.run(c))
If your GPU is set up correctly, this should run with no problem. If it isn't, you'll get a stack trace (just a bunch of error messages) that includes
Cannot assign a device to node 'MatMul': Could not satisfy explicit device specification '/device:GPU:0' because no devices matching that specification are registered in this process
If it ran cleanly, you're done. Congrats! I just got mine set up today and it's working perfectly :)
I could finally make it work with the following setup.
Hardware
Nvidia Video Card: Titan Xp
EGPU: Akitio Node
MacBook Pro (Retina, 13-inch, Early 2015)
Apple Thunderbolt3 to Thunderbolt2 Adapter
Apple Thunderbolt2 Cable
Software versions
macOS Sierra Version 10.12.6
GPU Driver Version: 10.18.5 (378.05.05.25f01)
CUDA Driver Version: 8.0.61
cuDNN v5.1 (Jan 20, 2017), for CUDA 8.0: Need to register and download
tensorflow-gpu 1.0.0
Keras 2.0.8
I wrote a gist with the procedure:
https://gist.github.com/jganzabal/8e59e3b0f59642dd0b5f2e4de03c7687
Here is my solution to install an eGPU on a Mac. TensorFlow no longer supports tensorflow-gpu on macOS, so there are definitely better approaches to get it working:
My configuration:
iMac 27" Late 2012
Akitio Node
GTX 1080 Ti
3 screens: one connected to the GTX 1080 Ti, the others plugged directly into the Mac.
Advantages of the Windows Bootcamp installation:
You can use pip to install tensorflow-gpu.
Good GPU 1080 ti support (Downloadable display driver)
Howto:
Install Windows 10 with Bootcamp. Do not connect the Akitio Node for the moment.
Download and install the display driver for your gpu from NVIDIA download page
Install Visual Studio
If you want to use CUDA 9.x you can install Visual Studio 2017
Otherwise install Visual Studio 2015
Install CUDA and CuDNN
Note that the tensorflow-gpu version must match your CUDA and cuDNN versions. See the available tensorflow releases here.
After the CUDA installation you can move the unpacked cuDNN files into the CUDA folder at C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0: the lib files into the lib folder, the bin files into the bin folder and the include files into the include folder.
Install Python 3.5+
You need a 64-bit version to install tensorflow-gpu with pip
Python 2.7 won't work.
Install tensorflow with pip:
Command:
pip install tensorflow-gpu==1.5.0rc0
Check your installation
The display driver has been installed correctly if you can plug a screen into the GTX 1080 Ti card.
Call C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe to check if your video card is available for CUDA.
Execute the following tensorflow command to see available devices:
from tensorflow.python.client import device_lib
device_lib.list_local_devices()
Troubleshooting and hints:
Windows will want to update your GTX 1080 driver. Never allow that, because you won't be able to start up your computer again! A black screen with moving dots will appear before you can log in to Windows. Game over! Only use the display driver from the NVIDIA download page.
If you cannot start windows on OSX anymore, press the alt key at startup to reinstall windows.
Ubuntu solution:
I couldn't find a working solution but here are some approaches:
It seems that my GTX 680 (iMac) and my GTX 1080 Ti won't work together: Ubuntu could not be started anymore after installing the display driver via apt-get. Try downloading the official display driver from the NVIDIA download page.
OSX Solution:
TensorFlow GPU is only supported up to TensorFlow 1.1. I tried to install a newer version but couldn't build tensorflow-gpu with CUDA support. Here are some approaches:
Install OSX Sierra to use the e-gpu script; High Sierra won't work (as of Jan 13, 2018). Downgrade to Sierra by deleting all your partitions, then press Command + R at startup to load internet recovery. Don't forget to back up your data first.
Install e-gpu script.
If tensorflow-gpu 1.1 is enough for you, you can just install it via pip; otherwise you need to build the pip package yourself with bazel.
Conclusion:
The Windows installation is easier than the OSX or Ubuntu installation because the display drivers work properly and TensorFlow does not have to be built on your own. Always check the software versions you use; they must match exactly.
I hope this will help you!
I have a running Python 2.7/3.4 installation on my Windows 7 (x64) machine. I would like to test curses on Windows.
Curses is installed but not working:
>>> import curses
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Tools\Python3.4.2\lib\curses\__init__.py", line 13, in <module>
from _curses import *
ImportError: No module named '_curses'
The documentation says:
The Windows version of Python doesn’t include the curses module. A ported version called UniCurses is available.
So, the Windows installer of Python 3.4 installed curses with unresolved dependencies. One could call this a bug...
OK, I looked into UniCurses. It's a wrapper for PDCurses:
UniCurses is a wrapper for Python 2.x/3.x that provides a unified set of Curses functions on all platforms (MS Windows, Linux, and Mac OS X) with syntax close to that of the original NCurses. To provide the Curses functionality on Microsoft Windows systems it wraps PDCurses.
Installing UniCurses via pip3 results in an error:
C:\Users\Paebbels>pip3 install UniCurses
Downloading/unpacking UniCurses
Could not find any downloads that satisfy the requirement UniCurses
Some externally hosted files were ignored (use --allow-external UniCurses to allow).
Cleaning up...
No distributions at all found for UniCurses
Storing debug log for failure in C:\Users\Paebbels\pip\pip.log
The link to SourceForge on Python's UniCurses site is dead. A manual search on SourceForge helped to find UniCurses for Python again.
But the UniCurses 1.2 installer cannot find any Python installation in my Windows registry (Python 2.7.9 and Python 3.4.2 are available).
I also looked into Public Domain Curses (PDCurses). PDCurses 3.4 is from late 2008, so it's 7 years old. I don't believe it will work with Windows 7, Windows 8.1 or Windows 10.
Is there any way to get curses running on Windows with Python? (The Windows Python, not the CygWin Python!)
You can use curses cross-platform (Windows, MacOS, GNU/Linux) if you install it manually on Windows; on the other platforms it installs like any other package.
Install the wheel package. If you need more info about wheel, click here.
Go to this repository.
Download the package for your Python version, for example for Python 3.4:
curses-2.2-cp34-none-win_amd64.whl
Install it (this command is for Windows; on GNU/Linux, install it like any other package):
python -m pip install curses-2.2-cp34-none-win_amd64.whl
Then just import it in your Python script:
import curses
You can use the curses wrapper for Python. It works in Fedora 25 in all terminals, and in Windows 10 using Git Bash, PowerShell, or cmd.
Update:
An alternative to curses in Windows here.
Console user interface in Windows here.
An interesting tutorial here.
Now we can install it easily on Python 3.7 using pip install windows-curses
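After installing it, the standard-library curses API works unchanged on Windows. A minimal smoke test might look like this (my own example, not taken from the package docs):

```python
import curses

def main(stdscr):
    # windows-curses provides the _curses extension module,
    # so the stdlib wrapper API works the same as on Linux.
    stdscr.addstr(0, 0, "hello from curses")
    stdscr.refresh()
    stdscr.getch()  # wait for a keypress before exiting

if __name__ == "__main__":
    curses.wrapper(main)  # sets up and tears down the terminal safely
```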
You can try my mirror of UniCurses, which includes the PDCurses DLLs. I currently have it up and running on Windows 7 with Python 3.5.0.
To quickly test whether it works for you, just clone the repository and create and run a Python script in its top-level directory containing something like
from unicurses import *
stdscr = initscr()
addstr("hello world")
getch()
endwin()  # restore the terminal state
I know that under VirtualBox the graphics card cannot be fully utilized, so I think this is not possible, but I also think that coding CUDA, or at least setting up the CUDA development environment, is easier on Windows (unfortunately). So if it is possible, I plan to set up Win8 in VirtualBox on my Ubuntu machine. I want to use Windows since I have an Optimus NVIDIA machine, and thus there is a driver problem on Ubuntu. In addition, compilation of the code in Eclipse does not work due to that driver flaw. If I use Windows, that might remedy the problem.
Even if you succeed in setting up an environment in your virtual box to compile CUDA code, it will be of no use, because you won't be able to run the code in the virtual box.
Yes, you are absolutely right that installing drivers on an Optimus NVIDIA card is a difficult task. I was also stuck with the same problem, but with the release of CUDA 5, installing CUDA on Ubuntu is very simple.
Follow these simple steps.
Driver installation
Download CUDA 5 from here (32-bit or 64-bit, depending on your OS):
Cuda 5 download
Install the required tools with the following command:
sudo apt-get install freeglut3-dev build-essential libx11-dev libxmu-dev libxi-dev libgl1-mesa-glx libglu1-mesa libglu1-mesa-dev
Next, blacklist the unnecessary modules
sudo gedit /etc/modprobe.d/blacklist.conf
and add the following lines at the end:
blacklist vga16fb
blacklist nouveau
blacklist rivafb
blacklist nvidiafb
blacklist rivatv
and reboot your system.
After the reboot, press Ctrl+Alt+F1, log in there, and enter the following command:
service lightdm stop
Go to the location where you downloaded CUDA 5. In my case it's on the desktop:
cd Desktop
Make it executable from the shell:
chmod +x cuda_5.3.35_linux*****
Run it from the terminal:
./cuda_5.3.35_linux*****
Accept the license; when asked to install the drivers press y, and press n for the cudatoolkit and gpucomputingsdk.
Now reboot, and you are done with the driver installation.
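After the final reboot, you can sanity-check that the driver actually loaded (assuming nvidia-smi shipped with your driver package, which it normally does):

```shell
# The nouveau module should be gone and the NVIDIA module loaded
lsmod | grep -i nvidia

# Query the driver; it should print your GPU model and driver version
nvidia-smi
```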
To install cudatoolkit and gpucomputingsdk follow this link
Cuda 4.2 installation on Ubuntu
So I am looking for some source code to crash the Mac kernel. I found crashme for Debian Linux, but that does not work on the Mac kernel. I was wondering if anyone knows where I can find a command-line utility or some source code to invoke a Mac kernel panic? This would be a huge help, thanks.
Apple has a tech note about how to do this.
The short way to do it is with this command, sudo dtrace -w -n "BEGIN{ panic();}", run from the terminal.
Update 2020: As noted by Wei Shen in the comments, you'll need to disable SIP to make this work in modern versions of macOS.
I recently updated crashme to work on Mac OS X Lion. You will need to download the source code from http://crashme.codeplex.com/ and compile it using the Xcode command line tools. More details are in an answer to question 5085136. Note that crashme hasn't found any immediate kernel panics on the Mac yet. However, after running crashme natively on my MacBook Pro and in VirtualBox VMs on the same machine (one x86 PC-BSD, one x64 CentOS), my Lion kernel became unhappy enough that it threw a kernel panic a few minutes later while I was editing a file in the native Emacs. So crashme may have stumbled upon a kernel bug.
Open Terminal and type "killall kernel_task"; it should force the computer into a panic without downloading any software. Just make sure you have everything saved before you try :D
Although this requires a password, it works every time:
sudo halt