I've recently installed Hue and am having problems connecting to the interface from an external host; I can connect locally without any issue. My hue.ini file is configured with http_host=0.0.0.0 and http_port=8888. I've seen some posts about fixing this by setting "Bind Hue Server to Wildcard Address" in Cloudera Manager, but I do not have Cloudera Manager. What is the corresponding way to do this in a standalone Hue installation?
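For reference, in a standalone installation the binding settings live in the [desktop] section of hue.ini; mine currently looks like this (surrounding settings omitted):

[desktop]
  # Bind the Hue web server to all interfaces so external hosts can reach it
  http_host=0.0.0.0
  http_port=8888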
error.log shows the following:
[24/Nov/2015 03:02:12 -0800] models ERROR error syncing oozie
Traceback (most recent call last):
File "/usr/local/hue/desktop/core/src/desktop/models.py", line 269, in sync
from oozie.models import Workflow, Coordinator, Bundle
ImportError: No module named oozie.models
[24/Nov/2015 03:02:12 -0800] models ERROR error syncing beeswax
Traceback (most recent call last):
File "/usr/local/hue/desktop/core/src/desktop/models.py", line 296, in sync
from beeswax.models import SavedQuery
ImportError: No module named beeswax.models
[24/Nov/2015 03:02:12 -0800] models ERROR error syncing pig
Traceback (most recent call last):
File "/usr/local/hue/desktop/core/src/desktop/models.py", line 308, in sync
from pig.models import PigScript
ImportError: No module named pig.models
[24/Nov/2015 03:02:12 -0800] models ERROR error syncing search
Traceback (most recent call last):
File "/usr/local/hue/desktop/core/src/desktop/models.py", line 318, in sync
from search.models import Collection
ImportError: No module named search.models
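A quick way to double-check that the server really is listening locally on the configured port (a minimal sketch; 8888 is the port from the hue.ini above):

import socket

# Connect to the Hue port on the local machine; success means the server is up locally.
sock = socket.create_connection(("127.0.0.1", 8888), timeout=5)
print("connected to", sock.getpeername())
sock.close()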
Related
When trying to import a Protogen model (.ckpt file type) to Diffusion Bee, I keep getting this error:
Error Traceback (most recent call last):
File "convert_model.py", line 28, in
KeyError: 'state_dict'
[83158] Failed to execute script 'convert_model' due to unhandled exception!
The model should import without issue.
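A common cause of a KeyError: 'state_dict' in checkpoint conversion scripts is that the .ckpt file stores the weights as a bare dictionary rather than wrapped under a 'state_dict' key. A minimal sketch of a possible workaround, assuming that is the case here (the file paths are placeholders, and this is not Diffusion Bee's own tooling):

import torch

# Load the raw checkpoint; map_location avoids needing a GPU just to inspect it.
ckpt = torch.load("protogen.ckpt", map_location="cpu")

# If the weights sit at the top level, wrap them the way the converter expects.
if "state_dict" not in ckpt:
    torch.save({"state_dict": ckpt}, "protogen_wrapped.ckpt")
    print("re-saved checkpoint with a 'state_dict' key")
else:
    print("checkpoint already has a 'state_dict' key")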
I rebooted my system and then started all my containers with the vendor/bin/sail up command; the only one that failed to come back up was MySQL. The error is the following:
ERROR: for mysql a bytes-like object is required, not 'str'
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/docker/api/client.py", line 261, in _raise_for_status
response.raise_for_status()
File "/usr/lib/python3/dist-packages/requests/models.py", line 940, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.25/containers/afdd1cbf7f45d9b20612bca
f73eef1b0bc1dd631bc6aa3dcfbf630c64e8a3662/start
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/compose/service.py", line 625, in start_container
container.start()
File "/usr/lib/python3/dist-packages/compose/container.py", line 241, in start
return self.client.start(self.id, **options)
File "/usr/lib/python3/dist-packages/docker/utils/decorators.py", line 19, in wrapped
return f(self, resource_id, *args, **kwargs)
File "/usr/lib/python3/dist-packages/docker/api/container.py", line 1095, in start
self._raise_for_status(res)
File "/usr/lib/python3/dist-packages/docker/api/client.py", line 263, in _raise_for_status
raise create_api_error_from_http_exception(e)
File "/usr/lib/python3/dist-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
raise cls(e, response=response, explanation=explanation)
docker.errors.APIError: 500 Server Error: Internal Server Error ("b'Ports are not available: listen tcp 0.0.0.0:3306: bind: An attempt was made
to access a socket in a way forbidden by its access permissions.'")
I'm running this container on Ubuntu Server 20.04.
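The "Ports are not available: listen tcp 0.0.0.0:3306" part suggests something on the host is already holding port 3306 (or the port is reserved). A minimal sketch to check this from Python, assuming the default MySQL port from the compose file:

import socket

# Try to bind the port MySQL wants; failure means something else already owns it.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.bind(("0.0.0.0", 3306))
    print("port 3306 is free")
except OSError as exc:
    print("port 3306 is not available:", exc)
finally:
    s.close()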
Providing an absolute path to your nginx/mysql conf file might fix the problem. I haven't tried this solution myself yet.
I am using minio 5.0.1, installed with this command:
pip install minio
But I still get this error:
Traceback (most recent call last):
File "minio.py", line 2, in <module>
from minio import Minio
File "/root/minio.py", line 2, in <module>
from minio import Minio
ImportError: cannot import name Minio
I guess it's too late for this answer, but anyway:
Since you called your Python file minio.py, Python tries to import that file instead of the minio package and fails to find an object inside called Minio.
Rename your .py file to something like miniotest.py and it should work.
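For illustration, once the script is renamed the import resolves against the installed package; a minimal sketch (the endpoint and credentials are placeholders, not values from the question):

# miniotest.py -- no longer shadows the installed minio package
from minio import Minio

# Placeholder endpoint and credentials; substitute your own.
client = Minio("play.min.io",
               access_key="YOUR-ACCESS-KEY",
               secret_key="YOUR-SECRET-KEY",
               secure=True)
print(client)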
At the moment I'm trying to create a cluster on AWS EC2 with GraphLab Create. The code is as follows:
import graphlab as gl

ec2config = gl.deploy.Ec2Config(region='us-west-2', instance_type='m3.large',
                                aws_access_key_id='secret-acces-key-id',
                                aws_secret_access_key='secret-access-key')

ec2 = gl.deploy.ec2_cluster.create(name='Test Cluster',
                                   s3_path='s3://test-big-data-2016',
                                   ec2_config=ec2config,
                                   idle_shutdown_timeout=3600,
                                   num_hosts=1)
When the above code is executed I get the following error:
Traceback (most recent call last):
File "test.py", line 59, in
ec2 = gl.deploy.ec2_cluster.create(name='Test Cluster', s3_path='s3://test-big-data-2016', ec2_config=ec2config, idle_shutdown_timeout=36000, num_hosts=1)
File "/Users/remco/anaconda/envs/gl-env/lib/python2.7/site-packages/graphlab/deploy/ec2_cluster.py", line 83, in create
cluster.start()
File "/Users/remco/anaconda/envs/gl-env/lib/python2.7/site-packages/graphlab/deploy/ec2_cluster.py", line 233, in start
self.idle_shutdown_timeout
File "/Users/remco/anaconda/envs/gl-env/lib/python2.7/site-packages/graphlab/deploy/_executionenvironment.py", line 372, in _start_commander_host
raise RuntimeError('Unable to start host(s). Please terminate '
RuntimeError: Unable to start host(s). Please terminate manually from the AWS console.
When I look in the EC2 Management Console, a new instance has been launched and is running, but I still get the error in the terminal.
I really don't know what I'm doing wrong here. I followed the exact instructions from https://turi.com/learn/userguide/deployment/pipeline-example.html
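As an aside, the orphaned instance that the error asks you to terminate manually can also be cleaned up from a script rather than the console. A minimal sketch using boto3 (the instance ID below is a placeholder, and boto3 is not part of GraphLab Create):

import boto3

# Placeholder instance ID; take the real one from the EC2 console or describe_instances.
ec2 = boto3.client('ec2', region_name='us-west-2')
ec2.terminate_instances(InstanceIds=['i-0123456789abcdef0'])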
The following code is from the Python 2.6 manual.
from multiprocessing import Process
import os
def info(title):
    print(title)
    print('module name:', 'me')
    print('parent process:', os.getppid())
    print('process id:', os.getpid())

def f(name):
    info('function f')
    print('hello', name)

if __name__ == '__main__':
    info('main line')
    p = Process(target=f, args=('bob',))
    p.start()
    p.join()
This creates the following stack traces:
Traceback (most recent call last):
File "threading.py", line 1, in <module>
from multiprocessing import Process
File "/usr/lib/python2.6/multiprocessing/__init__.py", line 64, in <module>
from multiprocessing.util import SUBDEBUG, SUBWARNING
File "/usr/lib/python2.6/multiprocessing/util.py", line 287, in <module>
class ForkAwareLocal(threading.local):
AttributeError: 'module' object has no attribute 'local'
Exception AttributeError: '_shutdown' in <module 'threading' from '/home/v0idnull/tmp/pythreads/threading.pyc'> ignored
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/usr/lib/python2.6/atexit.py", line 24, in _run_exitfuncs
func(*targs, **kargs)
File "/usr/lib/python2.6/multiprocessing/util.py", line 258, in _exit_function
info('process shutting down')
TypeError: 'NoneType' object is not callable
Error in sys.exitfunc:
Traceback (most recent call last):
File "/usr/lib/python2.6/atexit.py", line 24, in _run_exitfuncs
func(*targs, **kargs)
File "/usr/lib/python2.6/multiprocessing/util.py", line 258, in _exit_function
info('process shutting down')
TypeError: 'NoneType' object is not callable
I'm completely clueless as to WHY this is happening, and Google has given me very little to work with.
That code runs fine on my machine:
Ubuntu 10.10, Python 2.6.6 64-bit.
But your error is actually because you have a file named 'threading.py' that you are running this code from (see the stack-trace details). This causes a namespace clash, since the multiprocessing module needs the 'real' threading module. Try renaming your file to something other than 'threading.py' and running it again.
Also, the example you posted is not from the Python 2.6 docs; it is from the Python 3.x docs. Make sure you are reading the docs for the version that matches what you are running.
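A quick way to confirm the shadowing is to check which file Python actually loads for the threading module (a small sketch, run from the directory containing your script):

import threading

# If this prints a path inside your own project rather than the standard library,
# your local threading.py is shadowing the real module.
print(threading.__file__)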