My projects run much slower in JetBrains DataSpell than in Jupyter Notebook. When I try to tune hyperparameters with sklearn, it takes nearly 50 times longer than in Jupyter Notebook.
Does anyone have any solutions?
Recently I've been trying to set up an environment to use the GPU with Jupyter Notebook, and I ran a couple of commands, as follows:
```
pip install tensorflow-gpu
conda update -n base conda
```
In the end, I didn't get what I wanted; instead, it degraded my existing Jupyter Notebook performance. It printed several messages about downloading and removing packages I'm not familiar with.
Now the kernel runs noticeably slower. It's bearable when I'm running simple pandas or numpy code, but the slowdown is obvious with libraries that do heavier computation, like sklearn. Code that used to take about 3 to 5 minutes now takes around 30 minutes.
I've tried reinstalling Anaconda3 multiple times, but that doesn't restore the default settings. Can someone explain how to fix this, or how to reset my Jupyter Notebook setup (including packages) to the state it was in when I first installed Anaconda3?
I'm quite new to software, so I'm not familiar with all the terminology. If someone can explain the solution step by step, I'd really appreciate it.
Thank you
I am very new to Julia and not a computer expert, so this question may not be very clear. I am happy to add any information as necessary.
I am running Julia in VSCode on Windows. I recently added more RAM to the PC (going from 128 GB to 256 GB) and found that Julia became significantly slower. I tried several things, in this order: reseating the memory sticks, reinstalling Julia and VSCode, and disabling hyperthreading. Nothing worked.
I then decided to install Windows Server on my computer, hoping that Julia would behave normally under the new system. It was still slow.
Could anyone give me some suggestions on what to do? Thanks!
I'm new to Julia. I have a file called "example.jl" and I want to open it in Jupyter. I have already added the Julia kernel to my Jupyter kernels.
My question is:
Is there a terminal command, analogous to the one I use to open my notebooks:

jupyter notebook blabla.ipynb

that opens my "example.jl" script in a Jupyter notebook with the right Julia kernel?
I looked into many pages and couldn't find an answer.
P.S.: What I do now is open a notebook with the Julia kernel and copy the Julia script into it. But I'd like to know if there is a more elegant way to open .jl files.
Generally you need to create a new empty Jupyter notebook with Julia kernel and copy-paste your code there.
There is also a nice Julia package for this: Weave.jl. Since Jupyter's format is more complex, the script needs some special formatting (for hints, see the examples at https://github.com/JunoLab/Weave.jl). Once that is done, you can do the conversion like this:

```
using Weave
convert_doc("examples/some_code.jl", "some_notebook.ipynb")
```
There are some other (usually Python-based) tools that can split a source file into several Jupyter cells under some circumstances, but again, they all assume some specific code formatting.
P.S.
If you are looking for an IDE, try Visual Studio Code with the Julia extension, which is great.
I noticed that under Ubuntu, IPython starts really quickly, almost as fast as Python itself, while under Windows or macOS it starts really slowly. Is there a way to improve this?
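One way to quantify the difference is to time a bare interpreter launch from the standard library (a minimal sketch; if IPython is on your PATH, substitute `["ipython", "-c", ""]` for the command to time it instead):

```python
import subprocess
import sys
import time

def startup_time(cmd):
    """Run a command that exits immediately and return the wall-clock time."""
    start = time.perf_counter()
    subprocess.run(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return time.perf_counter() - start

# Time a bare Python launch as a baseline for comparison.
print(f"python startup: {startup_time([sys.executable, '-c', 'pass']):.3f}s")
```

Comparing the two numbers on each OS makes it clear how much of the delay comes from IPython itself rather than the interpreter.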
I am using JupyterLab on a Mac to run TensorFlow, and I'm wondering how I can clear all the previous graphs each time I rerun my code. Whenever I rerun it, all the previous graphs get overlaid on top of each other in TensorBoard.
Thanks
You should restart the kernel through the Kernel tab at the top of the notebook.
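Another common cause of overlaid TensorBoard curves is stale event files left over from earlier runs. A minimal stdlib sketch that wipes the log directory before each run (the `logs/` path is an assumption; point it at your actual `log_dir`):

```python
import pathlib
import shutil

def reset_log_dir(path):
    """Delete stale TensorBoard event files and recreate an empty directory."""
    if path.exists():
        shutil.rmtree(path)
    path.mkdir(parents=True, exist_ok=True)

# "logs/" is a placeholder -- use whatever directory you pass to TensorBoard.
reset_log_dir(pathlib.Path("logs"))
```

Alternatively, write each run to its own timestamped subdirectory so TensorBoard shows them as separate runs instead of overlaying them.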