I am currently using spacy (0.101.0), tensorflow (1.0.0) and sputnik (0.9.3). I am hitting this issue:
super(Pool, self).__init__(app_name, app_version, path, **kwargs)
File "/Users/sebastien/xxx/venv/lib/python3.6/site-packages/sputnik/package_list.py", line 33, in __init__
self.load()
File "/Users/sebastien/xxx/venv/lib/python3.6/site-packages/sputnik/package_list.py", line 51, in load
for package in self.packages():
File "/Users/sebastien/xxx/venv/lib/python3.6/site-packages/sputnik/package_list.py", line 47, in packages
yield self.__class__.package_class(path=os.path.join(self.path, path))
File "/Users/sebastien/xxx/venv/lib/python3.6/site-packages/sputnik/package.py", line 15, in __init__
super(Package, self).__init__(defaults=meta['package'])
KeyError: 'package'
I have tried combining different versions, but then my make-based build no longer works and I have trouble building the overall system.
With the versions listed above, this issue also happens when I run "python3 -m spacy.en.download".
Any idea?
Has anyone succeeded in running the OpenMDAO SimpleGADriver with the run_parallel option enabled?
When I try to run the example from the official website (https://openmdao.org/newdocs/versions/latest/features/building_blocks/drivers/genetic_algorithm.html#running-a-ga-in-parallel), it fails every time with this error:
Traceback (most recent call last):
File "C:\Program Files\Python39\lib\code.py", line 90, in runcode
exec(code, self.locals)
File "<input>", line 24, in <module>
File "C:\Users\z004cn5y\.virtualenvs\python-flask-server-generated\lib\site-packages\openmdao\utils\hooks.py", line 130, in execute_hooks
ret = f(*args, **kwargs)
File "C:\Users\z004cn5y\.virtualenvs\python-flask-server-generated\lib\site-packages\openmdao\core\problem.py", line 856, in run_driver
return self.driver.run()
File "C:\Users\z004cn5y\.virtualenvs\python-flask-server-generated\lib\site-packages\openmdao\drivers\genetic_algorithm_driver.py", line 385, in run
desvar_new, obj, self._nfit = ga.execute_ga(x0, lower_bound, upper_bound, outer_bound,
File "C:\Users\z004cn5y\.virtualenvs\python-flask-server-generated\lib\site-packages\openmdao\drivers\genetic_algorithm_driver.py", line 716, in execute_ga
x_pop = comm.bcast(x_pop, root=0)
AttributeError: 'FakeComm' object has no attribute 'bcast'
Has anyone faced and solved this issue?
Many thanks in advance
It looks like the script is not using a proper MPI installation but FakeComm, a dummy class that OpenMDAO substitutes when mpi4py is not installed. You should be able to pip install mpi4py and then run that example in parallel.
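A quick way to confirm which path OpenMDAO is taking (a minimal diagnostic sketch; openmdao.utils.mpi exposes MPI as None when mpi4py cannot be imported, which is what triggers the FakeComm fallback):

from openmdao.utils.mpi import MPI

if MPI is None:
    # mpi4py is missing or disabled: drivers get a FakeComm, and
    # run_parallel fails exactly as in the traceback above.
    print("No usable MPI: install mpi4py")
else:
    print("Real MPI available, rank %d" % MPI.COMM_WORLD.rank)

Also remember to launch the script under MPI once mpi4py is installed, e.g. mpiexec -n 4 python your_ga_script.py (the script name here is just a placeholder).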
I have been trying to install SerpentAI. I am on a Mac, and I have followed all the steps. I have all the dependencies, but when I use the serpent command it gives me this error.
I know I am missing a config file, but I don't know where to find it or how to solve this.
Any command that starts with serpent gives me this error.
Here is the error:
Traceback (most recent call last):
File "/anaconda3/bin/serpent", line 11, in <module>
load_entry_point('SerpentAI==2018.1.2', 'console_scripts', 'serpent')()
File "/anaconda3/lib/python3.6/site- packages/pkg_resources/__init__.py", line 480, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/anaconda3/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2691, in load_entry_point
return ep.load()
File "/anaconda3/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2322, in load
return self.resolve()
File "/anaconda3/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2328, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/anaconda3/lib/python3.6/site-packages/SerpentAI-2018.1.2-py3.6.egg/serpent/serpent.py", line 11, in <module>
from serpent.utilities import clear_terminal, display_serpent_logo, is_linux, is_macos, is_windows, is_unix, wait_for_crossbar
File "/anaconda3/lib/python3.6/site-packages/SerpentAI-2018.1.2-py3.6.egg/serpent/utilities.py", line 8, in <module>
from serpent.config import config
File "/anaconda3/lib/python3.6/site-packages/SerpentAI-2018.1.2-py3.6.egg/serpent/config.py", line 18, in <module>
raise Exception("Configuration file not found at: 'config/config.yml'...")
Exception: Configuration file not found at: 'config/config.yml'...
So I think I found the problem: the version that pip install fetches does not include the config files, but if you clone the GitHub repo, it will have them.
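If you want to do the copy by hand, a minimal sketch (the paths are assumptions; the error message suggests 'config/config.yml' is resolved relative to the directory you run serpent from):

import shutil

# Assumes you cloned https://github.com/SerpentAI/SerpentAI into ./SerpentAI;
# copy its bundled config directory next to where you invoke the serpent CLI.
shutil.copytree("SerpentAI/config", "config")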
I had the same issue when trying to run serpent visual_debugger after following all the installation guide steps.
Try running serpent setup; it might solve your problem. It helped in my case.
I am getting the following errors for the HDFS client installation on Ambari. I have reset the server several times but still cannot get it resolved. Any idea how to fix this?
stderr:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_client.py", line 120, in <module>
HdfsClient().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_client.py", line 36, in install
self.configure(env)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_client.py", line 41, in configure
hdfs()
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs.py", line 61, in hdfs
group=params.user_group
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/xml_config.py", line 67, in action_create
encoding = self.resource.encoding
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 87, in action_create
raise Fail("Applying %s failed, parent directory %s doesn't exist" % (self.resource, dirname))
resource_management.core.exceptions.Fail: Applying File['/usr/hdp/current/hadoop-client/conf/hadoop-policy.xml'] failed, parent directory /usr/hdp/current/hadoop-client/conf doesn't exist
This is a soft link that points to /etc/hadoop/conf.
I ran
python /usr/lib/python2.6/site-packages/ambari_agent/HostCleanup.py --silent --skip=users
After running it, /etc/hadoop/conf is removed.
However, reinstalling does not recreate it, so you may have to create all the conf files yourself.
I hope someone can patch this.
If you have done the installation multiple times, some packages might not have been cleaned up. To remove all HDP packages and start with a fresh installation, erase hdp-select:
yum -y erase hdp-select
If this does not help, remove all the versions from /usr/hdp; delete this directory if it contains multiple versions of HDP. Then remove all the installed packages (hadoop, hdfs, zookeeper, etc.):
yum remove zookeeper* hadoop* hdp*
I ran into the same problem: I was using HDP 2.3.2 on CentOS 7.
The first problem:
Some conf files point to the /etc/<component>/conf directory (as they are supposed to). However, /etc/<component>/conf points back to the other conf directory, which leads to an endless loop. I was able to fix this by removing the /etc/<component>/conf symbolic links and creating real directories instead.
The second problem:
If you run the Python cleanup scripts to wipe the installation and start over, several directories do not get recreated, such as the hadoop-client directory, which leads to exactly your error message. The cleanup script also does not work out well in that it leaves several users and directories behind; you have to remove them yourself with userdel and groupdel.
UPDATE:
It seems this was a problem with HDP 2.3.2. In HDP 2.3.4, I did not run into it any more.
Creating /usr/hdp/current/hadoop-client/conf on the failing host should solve the problem.
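For reference, a minimal sketch of that fix, assuming (as noted above) the path is meant to be a soft link to /etc/hadoop/conf; written to stay compatible with the Python 2.6 used by the agent:

import os

conf_dir = "/etc/hadoop/conf"
conf_link = "/usr/hdp/current/hadoop-client/conf"

# Recreate the real config directory that HostCleanup.py removed...
if not os.path.isdir(conf_dir):
    os.makedirs(conf_dir)

# ...and restore the soft link the HDFS client install expects.
if not os.path.lexists(conf_link):
    os.symlink(conf_dir, conf_link)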
I've written a Python script using Scrapy to crawl a site, and I'm trying to set up a Jenkins job to call the script nightly (this way it's very easy to see the output).
The machine I'm running Jenkins on is a Bitnami VM inside Google Compute Engine.
I set up the command as a shell build step in Jenkins, and it's failing with the following error:
Building on master in workspace /opt/bitnami/apps/jenkins/jenkins_home/jobs/Scrape and Import myco/workspace
[workspace] $ /bin/sh -xe /opt/bitnami/apache-tomcat/temp/hudson4165433582945317339.sh
+ /usr/local/myco/myscript.py -l /usr/local/myco/results/7.log -o /usr/local/myco/results/7.json -s /usr/local/myco/results/7.stats myspider
Traceback (most recent call last):
File "/usr/local/myco/myscript.py", line 5, in <module>
from twisted.internet import reactor
File "/usr/local/lib/python2.7/dist-packages/twisted/internet/reactor.py", line 38, in <module>
from twisted.internet import default
File "/usr/local/lib/python2.7/dist-packages/twisted/internet/default.py", line 56, in <module>
install = _getInstallFunction(platform)
File "/usr/local/lib/python2.7/dist-packages/twisted/internet/default.py", line 44, in _getInstallFunction
from twisted.internet.epollreactor import install
File "/usr/local/lib/python2.7/dist-packages/twisted/internet/epollreactor.py", line 24, in <module>
from twisted.internet import posixbase
File "/usr/local/lib/python2.7/dist-packages/twisted/internet/posixbase.py", line 23, in <module>
from twisted.internet import error, udp, tcp
File "/usr/local/lib/python2.7/dist-packages/twisted/internet/tcp.py", line 29, in <module>
from twisted.internet._newtls import (
File "/usr/local/lib/python2.7/dist-packages/twisted/internet/_newtls.py", line 21, in <module>
from twisted.protocols.tls import TLSMemoryBIOFactory, TLSMemoryBIOProtocol
File "/usr/local/lib/python2.7/dist-packages/twisted/protocols/tls.py", line 41, in <module>
from OpenSSL.SSL import Error, ZeroReturnError, WantReadError
File "/usr/local/lib/python2.7/dist-packages/OpenSSL/__init__.py", line 8, in <module>
from OpenSSL import rand, crypto, SSL
File "/usr/local/lib/python2.7/dist-packages/OpenSSL/rand.py", line 11, in <module>
from OpenSSL._util import (
File "/usr/local/lib/python2.7/dist-packages/OpenSSL/_util.py", line 7, in <module>
binding = Binding()
File "/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/bindings/openssl/binding.py", line 114, in __init__
self._ensure_ffi_initialized()
File "/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/bindings/openssl/binding.py", line 126, in _ensure_ffi_initialized
cls._modules,
File "/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/bindings/utils.py", line 31, in load_library_for_binding
lib = ffi.verifier.load_library()
File "/usr/local/lib/python2.7/dist-packages/cffi/verifier.py", line 97, in load_library
return self._load_library()
File "/usr/local/lib/python2.7/dist-packages/cffi/verifier.py", line 207, in _load_library
return self._vengine.load_library()
File "/usr/local/lib/python2.7/dist-packages/cffi/vengine_cpy.py", line 155, in load_library
raise ffiplatform.VerificationError(error)
cffi.ffiplatform.VerificationError: importing '/usr/local/lib/python2.7/dist-packages/cryptography/_Cryptography_cffi_e7d09016xc302a38b.so': /usr/local/lib/python2.7/dist-packages/cryptography/_Cryptography_cffi_e7d09016xc302a38b.so: symbol EC_GFp_nistp521_method, version OPENSSL_1.0.1 not defined in file libcrypto.so.1.0.0 with link time reference
Build step 'Execute shell' marked build as failure
Finished: FAILURE
I'm perplexed because when I run the same command manually (both as my own user and as tomcat, the user Jenkins runs under) I don't get this error; the script works fine.
I suspect this may have to do with the script being executed inside Apache, but I'm at my wits' end and googling hasn't turned up any obvious solutions.
Any idea as to how to solve this?
symbol EC_GFp_nistp521_method, version OPENSSL_1.0.1 not defined in file libcrypto.so.1.0.0
It looks like you are running a Python compiled against OpenSSL 1.0.1 with a libcrypto from OpenSSL 1.0.0. Jenkins may be running a different Python (or at least one compiled against a different OpenSSL version) while picking up extension modules from your local Python installation, which expect the newer OpenSSL.
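One way to confirm the mismatch from inside the exact interpreter Jenkins uses (a standard-library-only diagnostic; compare the output under Jenkins with a manual run):

import ssl

# Reports the OpenSSL this interpreter is linked against,
# e.g. 'OpenSSL 1.0.1f 6 Jan 2014'.
print(ssl.OPENSSL_VERSION)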
When trying to build indexes for Elasticsearch with django-haystack, I get this error (full traceback below):
TypeError: index_queryset() got an unexpected keyword argument 'using'
This is on Python 2.6, Django 1.4, and Elasticsearch 0.20.2. I previously encountered a similar prefetch error, which turned out to be a version mismatch between the pyelasticsearch and requests libraries. I've tried downgrading requests to 0.13, but with no effect. pyelasticsearch is currently 0.3.
Any help is very much appreciated!
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/var/www/myproj/myproj-env/lib/python2.6/site-packages/django/core/management/__init__.py", line 443, in execute_from_command_line
utility.execute()
File "/var/www/myproj/myproj-env/lib/python2.6/site-packages/django/core/management/__init__.py", line 382, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/var/www/myproj/myproj-env/lib/python2.6/site-packages/django/core/management/base.py", line 196, in run_from_argv
self.execute(*args, **options.__dict__)
File "/var/www/myproj/myproj-env/lib/python2.6/site-packages/django/core/management/base.py", line 232, in execute
output = self.handle(*args, **options)
File "/var/www/myproj/myproj-env/src/django-haystack/haystack/management/commands/update_index.py", line 184, in handle
return super(Command, self).handle(*items, **options)
File "/var/www/myproj/myproj-env/lib/python2.6/site-packages/django/core/management/base.py", line 341, in handle
label_output = self.handle_label(label, **options)
File "/var/www/myproj/myproj-env/src/django-haystack/haystack/management/commands/update_index.py", line 210, in handle_label
self.update_backend(label, using)
File "/var/www/myproj/myproj-env/src/django-haystack/haystack/management/commands/update_index.py", line 239, in update_backend
end_date=self.end_date)
File "/var/www/myproj/myproj-env/src/django-haystack/haystack/indexes.py", line 157, in build_queryset
index_qs = self.index_queryset(using=using)
TypeError: index_queryset() got an unexpected keyword argument 'using'
The problem is in the version of django-haystack. The current version is 0.3, while a few months ago it was 1.0, which is a source of confusion.
My way of solving the problem is simple and straightforward: install the latest version (0.3), then get the older version (in this case, 1.0) and simply overwrite the haystack sources with it.
The conclusion: the "older" 1.0 works smoothly with the latest versions of all prerequisites (pyelasticsearch, simplejson and requests), while the "newer" 0.3 doesn't.
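For anyone hitting the TypeError itself rather than the packaging mix-up: it can also come from a SearchIndex subclass that overrides index_queryset() with the old argument-less signature while the installed haystack passes using. A hedged sketch of the signature change (the index and model names are hypothetical):

from haystack import indexes
from myapp.models import Note  # hypothetical model

class NoteIndex(indexes.SearchIndex, indexes.Indexable):
    text = indexes.CharField(document=True, use_template=True)

    # Old signature was: def index_queryset(self)
    def index_queryset(self, using=None):
        # Accept (and here ignore) the 'using' keyword that the
        # newer update_index command passes in.
        return self.get_model().objects.all()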