I'm trying to set up a Scala coding environment in Jupyter Notebook. Java is pre-installed and my command prompt is running with admin permissions. When I run code in the notebook in the desired Scala environment, it hangs at "Initializing Scala interpreter ...".
I have scoured answers on multiple forums, and they point to either a Cassandra connector environment variable or SPARK_HOME.
Note that I have my Jupyter Notebook set up to create new files on the desktop, but I don't remember how I got to that config file.
Q: How do I resolve this dependency and set up the environment variable necessary so Scala works?
I have already checked the following question and used these commands from it:
How do I install Scala in Jupyter IPython Notebook?
pip install spylon-kernel
python -m spylon_kernel install
jupyter notebook
My PATH environment variable's current contents (screenshot omitted).
> scala_intp = initialize_scala_interpreter()
>   File "C:\ProgramData\Anaconda3\lib\site-packages\spylon_kernel\scala_interpreter.py", line 163, in initialize_scala_interpreter
>     spark_session, spark_jvm_helpers, spark_jvm_proc = init_spark()
>   File "C:\ProgramData\Anaconda3\lib\site-packages\spylon_kernel\scala_interpreter.py", line 71, in init_spark
>     conf._init_spark()
>   File "C:\ProgramData\Anaconda3\lib\site-packages\spylon\spark\launcher.py", line 479, in _init_spark
>     findspark.init(spark_home=spark_home, edit_rc=False, edit_profile=False, python_path=python_path)
>   File "C:\ProgramData\Anaconda3\lib\site-packages\findspark.py", line 143, in init
>     spark_home = find()
>   File "C:\ProgramData\Anaconda3\lib\site-packages\findspark.py", line 46, in find
>     raise ValueError(
> ValueError: Couldn't find Spark, make sure SPARK_HOME env is set or Spark is in an expected location (e.g. from homebrew installation).
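The ValueError above means findspark could not locate a Spark installation at all: spylon-kernel only provides the kernel, not Spark itself. As a minimal sketch of a fix (assuming you have downloaded and unpacked a Spark distribution; the candidate paths below are hypothetical examples, not your actual install locations), you can point SPARK_HOME at the unpacked directory before the kernel starts:

```python
import os

def ensure_spark_home(candidates):
    """Set SPARK_HOME to the first candidate directory that exists.

    Returns the chosen path, or None if no candidate exists.
    """
    for path in candidates:
        if os.path.isdir(path):
            os.environ["SPARK_HOME"] = path
            return path
    return None

# Hypothetical candidate locations -- adjust to wherever you unpacked Spark:
chosen = ensure_spark_home([
    r"C:\spark\spark-3.3.0-bin-hadoop3",
    os.path.expanduser("~/spark"),
])
```

On Windows you would more typically set SPARK_HOME persistently (System Properties > Environment Variables) so every new command prompt, and therefore Jupyter, sees it.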
When I install a package in an old venv, as below:
(conda_venv)my-ubuntu:~/file$: conda install random
Collecting package metadata (current_repodata.json): failed
Traceback (most recent call last):
  File "/home/my/anaconda3/lib/python3.9/site-packages/conda/exceptions.py", line 1214, in print_unexpected_error_report
    message_builder.append(get_main_info_str(error_report['conda info']))
  File "/home/my/anaconda3/lib/python3.9/site-packages/conda/cli/main_info.py", line 237, in get_main_info_str
    info_dict['_' + key] = ('\n' + 26 * ' ').join(info_dict[key])
KeyError: 'pkgs_dirs'
and
environment variables:
conda info could not be constructed.
KeyError('pkgs_dirs')
And when I create a new venv, it shows the same error as above.
I found someone who had the same problem as me; they used "conda config --show-sources" to resolve it. I tried that, but got nothing but a blank line.
I also tried "conda info", which showed:
  File "/home/my/anaconda3/lib/python3.9/site-packages/conda/exceptions.py", line 1082, in __call__
    return func(*args, **kwargs)
  File "/home/my/anaconda3/lib/python3.9/site-packages/conda/cli/main.py", line 87, in _main
    exit_code = do_call(args, p)
...
...
  File "/home/my/anaconda3/lib/python3.9/site-packages/conda/_vendor/distro.py", line 599, in __init__
    self._lsb_release_info = self._get_lsb_release_info()
subprocess.CalledProcessError: Command 'lsb_release -a' returned non-zero exit status 126
I don't know how to handle this problem. I would appreciate it if anyone could help. Thanks in advance!
Issues with KeyError can result from malformed configuration files. Please check that your .condarc files are not corrupted (they should be valid YAML). These can be located in three locations:
user home (~/.condarc)
Conda environment prefix (e.g., /home/my/anaconda3/.condarc, for OP)
working directory
It also should be noted that it appears to be looking for a key pkgs dirs rather than the standard pkgs_dirs. Perhaps someone ran a conda config --set 'pkgs dirs' '/path/to/blah' by mistake?
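As a quick sketch of where to look (the prefix argument is whatever your base install is, e.g. /home/my/anaconda3 for the OP), these are the three locations above in simplified form; note that conda's real search order checks more places than this:

```python
import os

def candidate_condarc_paths(prefix):
    """The three .condarc locations mentioned above (simplified sketch)."""
    return [
        os.path.join(os.path.expanduser("~"), ".condarc"),  # user home
        os.path.join(prefix, ".condarc"),                   # environment prefix
        os.path.join(os.getcwd(), ".condarc"),              # working directory
    ]

# Print only the ones that actually exist, so you know which file(s) to inspect:
for p in candidate_condarc_paths("/home/my/anaconda3"):
    if os.path.isfile(p):
        print(p)
```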
I have been working on creating custom environments with conda for new projects on my local machine (macOS). Within a project directory, I created a new environment from a yml file with:
$ conda env create --prefix ./env --file environment.yml
then:
$ conda activate ./env
I am now getting a conda error report every time I open a terminal, and I do not understand what is going on. If anyone has any insight into what I broke and how to fix it, it would be greatly appreciated. I hope I have included enough information here to understand the problem. Here is the error output from the terminal:
# >>>>>>>>>>>>>>>>>>>>>> ERROR REPORT <<<<<<<<<<<<<<<<<<<<<<
Traceback (most recent call last):
File "/Users/jrudd/opt/anaconda3/lib/python3.8/site-packages/conda/cli/main.py", line 140, in main
return activator_main()
File "/Users/jrudd/opt/anaconda3/lib/python3.8/site-packages/conda/activate.py", line 1210, in main
print(activator.execute(), end='')
File "/Users/jrudd/opt/anaconda3/lib/python3.8/site-packages/conda/activate.py", line 178, in execute
return getattr(self, self.command)()
File "/Users/jrudd/opt/anaconda3/lib/python3.8/site-packages/conda/activate.py", line 152, in activate
builder_result = self.build_activate(self.env_name_or_prefix)
File "/Users/jrudd/opt/anaconda3/lib/python3.8/site-packages/conda/activate.py", line 300, in build_activate
return self._build_activate_stack(env_name_or_prefix, False)
File "/Users/jrudd/opt/anaconda3/lib/python3.8/site-packages/conda/activate.py", line 326, in _build_activate_stack
conda_prompt_modifier = self._prompt_modifier(prefix, conda_default_env)
File "/Users/jrudd/opt/anaconda3/lib/python3.8/site-packages/conda/activate.py", line 691, in _prompt_modifier
return context.env_prompt.format(
KeyError: 'ds-basic'
`$ /Users/jrudd/opt/anaconda3/bin/conda shell.posix activate base`
environment variables:
CIO_TEST=<not set>
CONDA_EXE=/Users/jrudd/opt/anaconda3/bin/conda
CONDA_PYTHON_EXE=/Users/jrudd/opt/anaconda3/bin/python
CONDA_ROOT=/Users/jrudd/opt/anaconda3
CONDA_SHLVL=0
CURL_CA_BUNDLE=<not set>
PATH=/Users/jrudd/opt/anaconda3/bin:/Users/jrudd/opt/anaconda3/condabin:/us
r/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/TeX/texbin
REQUESTS_CA_BUNDLE=<not set>
SSL_CERT_FILE=<not set>
active environment : None
shell level : 0
user config file : /Users/jrudd/.condarc
populated config files : /Users/jrudd/.condarc
conda version : 4.9.2
conda-build version : 3.18.11
python version : 3.8.3.final.0
virtual packages : __osx=10.16=0
__unix=0=0
__archspec=1=x86_64
base environment : /Users/jrudd/opt/anaconda3 (writable)
channel URLs : https://repo.anaconda.com/pkgs/main/osx-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/osx-64
https://repo.anaconda.com/pkgs/r/noarch
package cache : /Users/jrudd/opt/anaconda3/pkgs
/Users/jrudd/.conda/pkgs
envs directories : /Users/jrudd/opt/anaconda3/envs
/Users/jrudd/.conda/envs
platform : osx-64
user-agent : conda/4.9.2 requests/2.24.0 CPython/3.8.3 Darwin/20.3.0 OSX/10.16
UID:GID : 501:20
netrc file : /Users/jrudd/.netrc
offline mode : False
An unexpected error has occurred. Conda has prepared the above report.
If submitted, this report will be used by core maintainers to improve
future releases of conda.
Would you like conda to send this report to the core maintainers?
Thanks!
So, I found out that I had screwed up the .condarc config file by changing the base env_prompt to ({ds-basic}). For context, ds-basic is the name of this project environment. Don't ask me why or how I did this, because I really have no idea; it was probably something I did during a coffee-induced haze. I found the .condarc file and edited out the offending value. It had actually broken conda completely, and I couldn't even activate the base environment. Once I fixed the .condarc file, all was well.
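For anyone else hitting this: the traceback shows conda builds the prompt with Python's str.format, so any {...} field in env_prompt that isn't a supported name (such as default_env) surfaces as exactly this KeyError. A tiny sketch of the mechanism:

```python
# The default, working template: {default_env} is a recognized field name.
good = "({default_env}) "
print(good.format(default_env="base"))  # -> "(base) "

# A broken env_prompt like mine: "{ds-basic}" is treated as an unknown key,
# so .format() raises KeyError('ds-basic') -- the error from the report above.
bad = "({ds-basic}) "
try:
    bad.format(default_env="base")
except KeyError as err:
    print("KeyError:", err)
```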
Hi!
I am facing a FileNotFound error in PyCharm when running this code:
import findspark
findspark.init("C:\\Users\\user\\spark-2.3.0-bin-hadoop2.7")
from pyspark import SparkConf
from pyspark.sql import SparkSession
conf = SparkConf().setAppName('Fresh-Fish')
spark = SparkSession.builder.config(conf=conf).getOrCreate()
I've tried several proposed ways to resolve it, but no luck. I am using Windows 8.1 Pro.
Traceback (most recent call last):
File "C:/Users/user/PycharmProjects/spark-project/spark1.py", line 8, in <module>
spark = SparkSession.builder.config(conf=conf).getOrCreate()
File "C:\Users\user\spark-2.3.0-bin-hadoop2.7\python\pyspark\sql\session.py", line 173, in getOrCreate
sc = SparkContext.getOrCreate(sparkConf)
File "C:\Users\user\spark-2.3.0-bin-hadoop2.7\python\pyspark\context.py", line 331, in getOrCreate
SparkContext(conf=conf or SparkConf())
File "C:\Users\user\spark-2.3.0-bin-hadoop2.7\python\pyspark\context.py", line 115, in __init__
SparkContext._ensure_initialized(self, gateway=gateway, conf=conf)
File "C:\Users\user\spark-2.3.0-bin-hadoop2.7\python\pyspark\context.py", line 280, in _ensure_initialized
SparkContext._gateway = gateway or launch_gateway(conf)
File "C:\Users\user\spark-2.3.0-bin-hadoop2.7\python\pyspark\java_gateway.py", line 80, in launch_gateway
proc = Popen(command, stdin=PIPE, env=env)
File "C:\Python27\lib\subprocess.py", line 390, in __init__
errread, errwrite)
File "C:\Python27\lib\subprocess.py", line 640, in _execute_child
startupinfo)
WindowsError: [Error 2] The system cannot find the file specified
Process finished with exit code 1
My system variables are:
SPARK_HOME = C:\Users\user\spark-2.3.0-bin-hadoop2.7
HADOOP_HOME = C:\hadoop #hadoop folder contains bin folder and bin folder contains winutils.exe
PATH = C:\Program Files (x86)\Common Files\Oracle\Java\javapath;C:\Program Files\Java\jre1.8.0_171\bin;C:\Python27;%SPARK_HOME%\bin;%JAVA_HOME%\bin;%HADOOP_HOME%\bin
JAVA_HOME = C:\Program Files\Java\jdk1.8.0_161
I have also tried pointing to python.exe from a new variable of its own, like:
PYSPARK_HOME = C:\Python27
Or even adding specific paths to the PyCharm Project Interpreter, pointing to:
C:\Users\user\spark-2.3.0-bin-hadoop2.7\python
"C:\Users\user\spark-2.3.0-bin-hadoop2.7\python\lib\py4j-0.10.6-src.zip"
but it didn't work.
Two screenshots from Pycharm:
Project Structure Pycharm
Project Interpreter Pycharm
If you need more details, I am happy to provide them. I've been struggling for quite some days; any new ideas are very welcome!
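WindowsError [Error 2] at Popen(command, ...) usually means the executable that launch_gateway tries to spawn (the spark-submit script, which in turn needs java) cannot be found. A small diagnostic sketch, using only standard-library environment checks (nothing here is PySpark-specific API):

```python
import os
import shutil

def diagnose_spark_launch():
    """Check the things PySpark's gateway launch typically depends on."""
    spark_home = os.environ.get("SPARK_HOME")
    return {
        "SPARK_HOME set": spark_home is not None,
        "SPARK_HOME exists": spark_home is not None and os.path.isdir(spark_home),
        "spark-submit on PATH": shutil.which("spark-submit") is not None,
        "java on PATH": shutil.which("java") is not None,
    }

for check, ok in diagnose_spark_launch().items():
    print(("OK  " if ok else "FAIL") + " " + check)
```

If any of these report FAIL in the same shell or IDE that runs the script, the environment variables set in System Properties may not have been picked up; restarting PyCharm (or the terminal) after editing them is a common fix.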
I would like to set a breakpoint in a PyDev script written in Jython. I've tried various configurations:
(1) Setting a breakpoint in the Eclipse editor but nothing happens.
(2 & 3) Forcing a trace by adding the following code into the script:
import template_helper
if False:
py_context_type = org.python.pydev.editor.templates.PyContextType
def MyFunc(context):
# option (2) - try pydevd with another eclipse session hosting debug server
#import sys
#sys.path.append(r"... pydev.core_6.3.3.201805051638\pysrc")
#import pydevd; pydevd.settrace()
# option (3) - try pdb
import pdb; pdb.set_trace()
return "some text"
template_helper.AddTemplateVariable(py_context_type, 'mysub', 'A desc', MyFunc)
Trying pydevd (option 2) just crashes with an exception added to the error log, along the lines of:
Caused by: Traceback (most recent call last):
File "...\org.python.pydev.jython_6.3.3.201805051638\jysrc\template_helper.py", line 20, in resolveAll
ret = self._callable(context)
File "...\pydev_scripts\src\pytemplate_local.py", line 12, in MyFunc
import pydevd; pydevd.settrace(stdoutToServer=True, stderrToServer=True)
File "...\org.python.pydev.core_6.3.3.201805051638\pysrc\pydevd.py", line 1189, in settrace
_locked_settrace(
File "...\org.python.pydev.core_6.3.3.201805051638\pysrc\pydevd.py", line 1295, in _locked_settrace
debugger.set_tracing_for_untraced_contexts(ignore_frame=get_frame(), overwrite_prev_trace=overwrite_prev_trace)
File "...\org.python.pydev.core_6.3.3.201805051638\pysrc\pydevd.py", line 595, in set_tracing_for_untraced_contexts
for frame in additional_info.iter_frames(t):
File "...\org.python.pydev.core_6.3.3.201805051638\pysrc\_pydevd_bundle\pydevd_additional_thread_info_regular.py", line 117, in iter_frames
current_frames = _current_frames()
File "...\org.python.pydev.core_6.3.3.201805051638\pysrc\_pydevd_bundle\pydevd_additional_thread_info_regular.py", line 26, in _current_frames
as_array = thread_states.entrySet().toArray()
AttributeError: 'java.lang.ThreadLocal' object has no attribute 'entrySet'
Trying vanilla pdb (option 3) prints the (Pdb) prompt in the PyDev Scripting console, but one can't enter any text to go into interactive mode, e.g.:
(Pdb) IOError: IOError(...nvalid',)
> ...\org.python.pydev.jython_6.3.3.201805051638\jysrc\template_helper.py(20)resolveAll()
-> ret = self._callable(context)
(Pdb)
Perhaps it's not possible. Any suggestions?
For future reference, I was eventually able to debug scripts by downloading the Jython 2.7.1 installer from Maven. I installed this Jython to a temporary location. After backing up the Jython plugin folder bundled with PyDev, I copied the relevant directories over the PyDev Jython installation, along with a copy of the pydevd package. I was then able to step-through debug in a separate instance of Eclipse after setting a breakpoint as described in Option (2) above.
Thanks for your help in the comments @FabioZadrozny.
As I have very little knowledge about Linux, pretty much all I can do is copy and paste things from a good tutorial and, in most cases, simply hope nothing goes wrong. I really tried finding a solution on my own by searching the internet, but to no avail (I found a number of quite similar things, but no solution I understood well enough to adapt to fix my problem).
I've installed an osm tile server using this amazing tutorial and it works like a charm. Now I want to install umap, using this tutorial.
Everything works fine until I get to the line "umap collectstatic". The error I get is this:
(venv) $ sudo umap collectstatic
[sudo] password for umap2:
You have requested to collect static files at the destination
location as specified in your settings:
/home/ybon/.virtualenvs/umap/var/static
This will overwrite existing files!
Are you sure you want to do this?
Type 'yes' to continue, or 'no' to cancel: yes
Traceback (most recent call last):
File "/usr/local/bin/umap", line 11, in <module>
sys.exit(main())
File "/usr/local/lib/python2.7/dist-packages/umap/bin/__init__.py", line 12, in main
management.execute_from_command_line()
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 367, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 359, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 294, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 345, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/python2.7/dist-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 193, in handle
collected = self.collect()
File "/usr/local/lib/python2.7/dist-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 115, in collect
for path, storage in finder.list(self.ignore_patterns):
File "/usr/local/lib/python2.7/dist-packages/django/contrib/staticfiles/finders.py", line 112, in list
for path in utils.get_files(storage, ignore_patterns):
File "/usr/local/lib/python2.7/dist-packages/django/contrib/staticfiles/utils.py", line 28, in get_files
directories, files = storage.listdir(location)
File "/usr/local/lib/python2.7/dist-packages/django/core/files/storage.py", line 399, in listdir
for entry in os.listdir(path):
OSError: [Errno 2] No such file or directory: '/home/ybon/Code/js/Leaflet.Storage'
Now, I gather that something might be wrong with a setting in a config file somewhere, but changing the directory in local.py doesn't seem to do anything (I have set it to STATIC_ROOT = '/home/xxx_myusername_xxx/umap/var/static'). I have no idea where this "/home/ybon/Code/..." path even comes from! What settings? I sure didn't specify THIS path anywhere, and the folder is nowhere to be found on my machine. Maybe using virtualenv is somehow generating it and I can't find it because it IS virtual (as in "not really there physically"), but this is just a very wild guess and I don't really know what I'm talking about.
(I tried running the command with and without sudo, and it doesn't change anything.)
I have always wanted to install a tile server, and today I tried the tutorials you mentioned. So I'm a learner like you!
Installing the Tile Server with the tutorial https://www.linuxbabe.com/linux-server/openstreetmap-tile-server-ubuntu-16-04 was really straightforward. I only used the part for Rhineland Palatinate.
With Umap (https://umap-project.readthedocs.io/en/latest/ubuntu/#tutorial) I had some problems.
1. A port was used twice. I changed the port for Apache.
2. After creating the local configuration (wget https://raw.githubusercontent.com/umap-project/umap/master/umap/settings/local.py.sample -O /etc/umap/umap.conf), this file was not immediately recognized. I helped myself by editing the file before executing the command "umap migrate".
I have made the following changes:
# For static deployment
STATIC_ROOT = '/etc/umap/var/static'
# For users' statics (geojson mainly)
MEDIA_ROOT = '/etc/umap/umap/var/data'
# Umap Settings
UMAP_SETTINGS='/etc/umap/umap.conf'
I changed STATIC_ROOT and MEDIA_ROOT because that way the user umap has all the necessary permissions. Then I set the environment variable UMAP_SETTINGS, because otherwise the settings file /etc/umap/umap.conf is not found.
(I also have no idea where this "/home/ybon/Code/..." path comes from. Once the configuration file is properly loaded, the path is taken from the configuration file, so it doesn't matter anymore.)
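To make that last point concrete: the umap command only picks up /etc/umap/umap.conf if UMAP_SETTINGS is exported in the shell that runs it. A tiny sketch of the idea (this mimics an env-var-driven config path in general; it is not umap's actual loading code, and the default below is just the path from this answer):

```python
import os

def settings_path(default="/etc/umap/umap.conf"):
    # Read the config location from the environment, like running
    # `export UMAP_SETTINGS=...` in the shell before `umap migrate`.
    return os.environ.get("UMAP_SETTINGS", default)

os.environ["UMAP_SETTINGS"] = "/etc/umap/umap.conf"  # what the export achieves
print(settings_path())  # -> /etc/umap/umap.conf
```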
Now I could use the following commands without errors:
(venv) $ umap collectstatic
Loaded local config from /etc/umap/umap.conf
You have requested to collect static files at the destination
location as specified in your settings:
/etc/umap/var/static
This will overwrite existing files!
Are you sure you want to do this?
Type 'yes' to continue, or 'no' to cancel: yes
Copying '/srv/umap/venv/lib/python3.5/site-packages/umap/static/favicon.ico'
...
290 static files copied to '/etc/umap/var/static'.
(venv) $ umap storagei18n
Loaded local config from /etc/umap/umap.conf
Processing English
Found file /etc/umap/var/static/storage/src/locale/en.json
Exporting to /etc/umap/var/static/storage/src/locale/en.js
..
Processing Deutsch
Found file /etc/umap/var/static/storage/src/locale/de.json
..
Found file /etc/umap/var/static/storage/src/locale/sk_SK.json
Exporting to /etc/umap/var/static/storage/src/locale/sk_SK.js
(venv) $ umap createsuperuser
Loaded local config from /etc/umap/umap.conf
Username (leave blank to use 'umap'):
Email address:
Password:
Password (again):
Superuser created successfully.
(venv) $ umap runserver 0.0.0.0:8000
Loaded local config from /etc/umap/umap.conf
Loaded local config from /etc/umap/umap.conf
Performing system checks...
System check identified no issues (0 silenced).
April 09, 2018 - 14:02:15
Django version 1.10.5, using settings 'umap.settings'
Starting development server at http://0.0.0.0:8000/
And finally I was able to use umap.