docker-compose and iterm2 on Mac - macos

I am on a Mac running El Capitan 10.11.5.
Until today, I was able to start the Docker daemon by launching "Docker Quickstart Terminal", then going into my project folder and running docker-compose up.
Now, when I run that, I keep getting:
docker-compose --verbose up --timeout 120
compose.config.config.find: Using configuration files: ./docker-compose.yml
docker.auth.auth.load_config: Found 'auths' section
docker.auth.auth.parse_auth: Found entry (registry=u'https://index.docker.io/v1/', username=u'my_user')
Traceback (most recent call last):
File "<string>", line 3, in <module>
File "compose/cli/main.py", line 58, in main
File "compose/cli/main.py", line 106, in perform_command
File "compose/cli/command.py", line 34, in project_from_options
File "compose/cli/command.py", line 79, in get_project
File "compose/cli/command.py", line 55, in get_client
File "site-packages/docker/api/daemon.py", line 76, in version
File "site-packages/docker/utils/decorators.py", line 47, in inner
File "site-packages/docker/client.py", line 120, in _get
File "site-packages/requests/sessions.py", line 477, in get
File "site-packages/requests/sessions.py", line 465, in request
File "site-packages/requests/sessions.py", line 573, in send
File "site-packages/requests/adapters.py", line 415, in send
requests.exceptions.ConnectionError: ('Connection aborted.', error(2, 'No such file or directory'))
Is there a quick solution for this problem? My versions are:
docker-machine version 0.7.0, build a650a40
Docker version 1.11.1, build 5604cbe
docker-compose version 1.7.1, build 0a9ab35
iTerm2 Build 3.0.0
VirtualBox Version 5.0.20 r106931

I solved this problem by adding the following line to my .profile file:
alias docker_compose_run='eval "$(docker-machine env default)" && docker-compose up'
Then I simply run docker_compose_run.
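For context, the alias works because docker-machine env default prints the shell exports the Docker client needs to reach the daemon inside the VM, and eval applies them to the current shell. A sketch of its typical output (the exact IP and paths vary per machine):
# Typical output of `docker-machine env default`; values here are illustrative
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="$HOME/.docker/machine/machines/default"
export DOCKER_MACHINE_NAME="default"
With these set, docker-compose can find the daemon instead of failing with the connection error above.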

Related

'azsphere tenant list' command error occurred

To check my Azure Sphere tenant list, I used the azsphere tenant list command, but an error occurred, as shown below.
>azsphere tenant list --verbose
The command failed with an unexpected error. Here is the traceback:
No section: 'sphere'
Traceback (most recent call last):
File "D:\a\_work\1\s\azsphere-cli\build_scripts\windows\artifacts\cli\Lib\site-packages\knack/cli.py", line 231, in invoke
File "D:\a\_work\1\s\azsphere-cli\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 657, in execute
File "D:\a\_work\1\s\azsphere-cli\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 720, in _run_jobs_serially
File "D:\a\_work\1\s\azsphere-cli\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 712, in _run_job
File "D:\a\_work\1\s\azsphere-cli\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/sphere/cli/core/cloud_exception_handler.py", line 189, in cloud_exception_handler
File "D:\a\_work\1\s\azsphere-cli\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 691, in _run_job
File "D:\a\_work\1\s\azsphere-cli\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 328, in __call__
File "D:\a\_work\1\s\azsphere-cli\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/command_operation.py", line 112, in handler
File "D:\a\_work\1\s\azsphere-cli\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/sphere/cli/core/client_factory.py", line 122, in cf_tenants
File "D:\a\_work\1\s\azsphere-cli\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/sphere/cli/core/client_factory.py", line 35, in cf_public_api
File "D:\a\_work\1\s\azsphere-cli\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/sphere/cli/core/profile.py", line 220, in get_login_credentials
File "D:\a\_work\1\s\azsphere-cli\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/sphere/cli/core/config.py", line 144, in get_azsphere_config_value
File "D:\a\_work\1\s\azsphere-cli\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/sphere/cli/core/config.py", line 114, in get_global_config_value
File "D:\a\_work\1\s\azsphere-cli\build_scripts\windows\artifacts\cli\Lib\site-packages\knack/config.py", line 97, in get
File "D:\a\_work\1\s\azsphere-cli\build_scripts\windows\artifacts\cli\Lib\site-packages\knack/config.py", line 92, in get
File "D:\a\_work\1\s\azsphere-cli\build_scripts\windows\artifacts\cli\Lib\site-packages\knack/config.py", line 206, in get
File "configparser.py", line 781, in get
File "configparser.py", line 1149, in _unify_values
configparser.NoSectionError: No section: 'sphere'
To open an issue, please run: 'az feedback'
Command ran in 0.159 seconds (init: 0.008, invoke: 0.151)
I have 3 Azure Sphere tenants.
When I use the "Azure Sphere Classic Developer Command Prompt (Deprecated)", it works.
I have been using this environment for a while, and it seems the problem started after installing the latest version of the Azure Sphere SDK.
Below is the current SDK version:
>azsphere show-version
----------------
Azure Sphere SDK
================
22.02.3.41775
----------------
How can I fix this?
This is a known issue with the 22.02 SDK and will be fixed in the next release. For now, if you log in to the CLI (azsphere login), the issue will be resolved.
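In other words, running the login flow once with the new CLI should recreate the missing configuration (presumably the absent 'sphere' section from the traceback), after which listing tenants works again:
# log in once with the new CLI, then retry the failing command
azsphere login
azsphere tenant list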

installing requirements-dev.txt from Qiskit raises an error with SQNomad

I am trying to install the following requirements with pip:
coverage>=4.4.0
hypothesis>=4.24.3
ipywidgets>=7.3.0
jupyter
matplotlib>=2.1
pillow>=4.2.1
pycodestyle
pydot
astroid==2.5
pylint==2.7.1
stestr>=2.0.0
PyGithub
wheel
cython>=0.27.1
pylatexenc>=1.4
ddt>=1.2.0,!=1.4.0
seaborn>=0.9.0
reno>=3.2.0
Sphinx>=1.8.3,<3.1.0
qiskit-sphinx-theme>=1.6
sphinx-autodoc-typehints
jupyter-sphinx
sphinx-panels
pygments>=2.4
tweedledum==0.1b0
networkx>=2.2
scikit-learn>=0.20.0
scikit-quant;platform_system != 'Windows'
jax;platform_system != 'Windows'
jaxlib;platform_system != 'Windows'
When pip reaches SQNomad, I get the following error:
Downloading SQNomad-0.1.0.tar.gz (385 kB)
|████████████████████████████████| 385 kB 5.6 MB/s
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing wheel metadata ... error
ERROR: Command errored out with exit status 1:
command: /Users/jim-felixlobsien/opt/anaconda3/bin/python /Users/jim-felixlobsien/opt/anaconda3/lib/python3.8/site-packages/pip/_vendor/pep517/_in_process.py prepare_metadata_for_build_wheel /var/folders/h4/1z5lkc1x4nb2s31y0f6y9r200000gn/T/tmpta1cj_q6
cwd: /private/var/folders/h4/1z5lkc1x4nb2s31y0f6y9r200000gn/T/pip-install-pw0tuweh/SQNomad
Complete output (55 lines):
running dist_info
creating /private/var/folders/h4/1z5lkc1x4nb2s31y0f6y9r200000gn/T/pip-modern-metadata-j_5tehl8/SQNomad.egg-info
writing /private/var/folders/h4/1z5lkc1x4nb2s31y0f6y9r200000gn/T/pip-modern-metadata-j_5tehl8/SQNomad.egg-info/PKG-INFO
writing dependency_links to /private/var/folders/h4/1z5lkc1x4nb2s31y0f6y9r200000gn/T/pip-modern-metadata-j_5tehl8/SQNomad.egg-info/dependency_links.txt
writing requirements to /private/var/folders/h4/1z5lkc1x4nb2s31y0f6y9r200000gn/T/pip-modern-metadata-j_5tehl8/SQNomad.egg-info/requires.txt
writing top-level names to /private/var/folders/h4/1z5lkc1x4nb2s31y0f6y9r200000gn/T/pip-modern-metadata-j_5tehl8/SQNomad.egg-info/top_level.txt
writing manifest file '/private/var/folders/h4/1z5lkc1x4nb2s31y0f6y9r200000gn/T/pip-modern-metadata-j_5tehl8/SQNomad.egg-info/SOURCES.txt'
Traceback (most recent call last):
File "/Users/jim-felixlobsien/opt/anaconda3/lib/python3.8/site-packages/pip/_vendor/pep517/_in_process.py", line 280, in <module>
main()
File "/Users/jim-felixlobsien/opt/anaconda3/lib/python3.8/site-packages/pip/_vendor/pep517/_in_process.py", line 263, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/Users/jim-felixlobsien/opt/anaconda3/lib/python3.8/site-packages/pip/_vendor/pep517/_in_process.py", line 133, in prepare_metadata_for_build_wheel
return hook(metadata_directory, config_settings)
File "/private/var/folders/h4/1z5lkc1x4nb2s31y0f6y9r200000gn/T/pip-build-env-8qqwe_v9/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 161, in prepare_metadata_for_build_wheel
self.run_setup()
File "/private/var/folders/h4/1z5lkc1x4nb2s31y0f6y9r200000gn/T/pip-build-env-8qqwe_v9/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 253, in run_setup
super(_BuildMetaLegacyBackend,
File "/private/var/folders/h4/1z5lkc1x4nb2s31y0f6y9r200000gn/T/pip-build-env-8qqwe_v9/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 145, in run_setup
exec(compile(code, __file__, 'exec'), locals())
File "setup.py", line 77, in <module>
setup(
File "/private/var/folders/h4/1z5lkc1x4nb2s31y0f6y9r200000gn/T/pip-build-env-8qqwe_v9/overlay/lib/python3.8/site-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/Users/jim-felixlobsien/opt/anaconda3/lib/python3.8/distutils/core.py", line 148, in setup
dist.run_commands()
File "/Users/jim-felixlobsien/opt/anaconda3/lib/python3.8/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/Users/jim-felixlobsien/opt/anaconda3/lib/python3.8/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/private/var/folders/h4/1z5lkc1x4nb2s31y0f6y9r200000gn/T/pip-build-env-8qqwe_v9/overlay/lib/python3.8/site-packages/setuptools/command/dist_info.py", line 31, in run
egg_info.run()
File "/private/var/folders/h4/1z5lkc1x4nb2s31y0f6y9r200000gn/T/pip-build-env-8qqwe_v9/overlay/lib/python3.8/site-packages/setuptools/command/egg_info.py", line 299, in run
self.find_sources()
File "/private/var/folders/h4/1z5lkc1x4nb2s31y0f6y9r200000gn/T/pip-build-env-8qqwe_v9/overlay/lib/python3.8/site-packages/setuptools/command/egg_info.py", line 306, in find_sources
mm.run()
File "/private/var/folders/h4/1z5lkc1x4nb2s31y0f6y9r200000gn/T/pip-build-env-8qqwe_v9/overlay/lib/python3.8/site-packages/setuptools/command/egg_info.py", line 541, in run
self.add_defaults()
File "/private/var/folders/h4/1z5lkc1x4nb2s31y0f6y9r200000gn/T/pip-build-env-8qqwe_v9/overlay/lib/python3.8/site-packages/setuptools/command/egg_info.py", line 577, in add_defaults
sdist.add_defaults(self)
File "/Users/jim-felixlobsien/opt/anaconda3/lib/python3.8/distutils/command/sdist.py", line 228, in add_defaults
self._add_defaults_ext()
File "/Users/jim-felixlobsien/opt/anaconda3/lib/python3.8/distutils/command/sdist.py", line 311, in _add_defaults_ext
build_ext = self.get_finalized_command('build_ext')
File "/Users/jim-felixlobsien/opt/anaconda3/lib/python3.8/distutils/cmd.py", line 299, in get_finalized_command
cmd_obj.ensure_finalized()
File "/Users/jim-felixlobsien/opt/anaconda3/lib/python3.8/distutils/cmd.py", line 107, in ensure_finalized
self.finalize_options()
File "/private/var/folders/h4/1z5lkc1x4nb2s31y0f6y9r200000gn/T/pip-build-env-8qqwe_v9/overlay/lib/python3.8/site-packages/numpy/distutils/command/build_ext.py", line 86, in finalize_options
self.set_undefined_options('build',
File "/Users/jim-felixlobsien/opt/anaconda3/lib/python3.8/distutils/cmd.py", line 290, in set_undefined_options
setattr(self, dst_option, getattr(src_cmd_obj, src_option))
File "/Users/jim-felixlobsien/opt/anaconda3/lib/python3.8/distutils/cmd.py", line 103, in __getattr__
raise AttributeError(attr)
AttributeError: cpu_baseline
----------------------------------------
ERROR: Command errored out with exit status 1: /Users/jim-felixlobsien/opt/anaconda3/bin/python /Users/jim-felixlobsien/opt/anaconda3/lib/python3.8/site-packages/pip/_vendor/pep517/_in_process.py prepare_metadata_for_build_wheel /var/folders/h4/1z5lkc1x4nb2s31y0f6y9r200000gn/T/tmpta1cj_q6 Check the logs for full command output.
Does anybody know what is happening? I am trying to follow the instructions in the following video:
https://www.youtube.com/watch?v=QjZdvNgYl3s&t=731s
The task is to contribute to Qiskit!
Thank you very much in advance
The problem comes from a dependency of scikit-quant. The most direct way to fix this issue is to install the previous version (0.7) of scikit-quant instead of the current one (0.8), as Qiskit seems compatible with it:
pip install 'scikit-quant==0.7'
Fixed with SQNomad 0.2.1; thanks to luciano for reporting.
But it has now also been made an optional component of scikit-quant, installable with scikit-quant[NOMAD] if desired. It is quite a large C++ library and needs to be split up between its Python-dependent and Python-independent parts to prevent creating massive wheels; then the installation will be easier and its optional status can be reverted.
All optimizers can in any case be installed and used independently (e.g. with python -m pip install SQNomad for NOMAD).
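Putting the two options together, a short sketch of the install paths described above:
# Option 1: pin scikit-quant to the older release Qiskit works with
python -m pip install 'scikit-quant==0.7'
# Option 2: on newer scikit-quant releases, opt in to NOMAD explicitly
python -m pip install 'scikit-quant[NOMAD]'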

Unable to start ElastAlert: Only timezones from the pytz library are supported

I am unable to test a rule against Elasticsearch; I am running the following command in the terminal:
elastalert-test-rule --config config.yaml example_rules/example_frequency.yaml
File "/usr/local/bin/elastalert-test-rule", line 11, in <module>
load_entry_point('elastalert==0.2.4', 'console_scripts', 'elastalert-test-rule')()
File "/usr/local/lib/python3.6/dist-packages/elastalert-0.2.4-py3.6.egg/elastalert/test_rule.py", line 445, in main
test_instance.run_rule_test()
File "/usr/local/lib/python3.6/dist-packages/elastalert-0.2.4-py3.6.egg/elastalert/test_rule.py", line 437, in run_rule_test
self.run_elastalert(rule_yaml, conf, args)
File "/usr/local/lib/python3.6/dist-packages/elastalert-0.2.4-py3.6.egg/elastalert/test_rule.py", line 307, in run_elastalert
client = ElastAlerter(['--debug'])
File "/usr/local/lib/python3.6/dist-packages/elastalert-0.2.4-py3.6.egg/elastalert/elastalert.py", line 173, in __init__
if not self.init_rule(rule):
File "/usr/local/lib/python3.6/dist-packages/elastalert-0.2.4-py3.6.egg/elastalert/elastalert.py", line 1038, in init_rule
jitter=5)
File "/usr/local/lib/python3.6/dist-packages/APScheduler-3.6.3-py3.6.egg/apscheduler/schedulers/base.py", line 420, in add_job
'trigger': self._create_trigger(trigger, trigger_args),
File "/usr/local/lib/python3.6/dist-packages/APScheduler-3.6.3-py3.6.egg/apscheduler/schedulers/base.py", line 921, in _create_trigger
return self._create_plugin_instance('trigger', trigger, trigger_args)
File "/usr/local/lib/python3.6/dist-packages/APScheduler-3.6.3-py3.6.egg/apscheduler/schedulers/base.py", line 906, in _create_plugin_instance
return plugin_cls(**constructor_kwargs)
File "/usr/local/lib/python3.6/dist-packages/APScheduler-3.6.3-py3.6.egg/apscheduler/triggers/interval.py", line 38, in __init__
self.timezone = astimezone(timezone)
File "/usr/local/lib/python3.6/dist-packages/APScheduler-3.6.3-py3.6.egg/apscheduler/util.py", line 93, in astimezone
raise TypeError('Only timezones from the pytz library are supported')
TypeError: Only timezones from the pytz library are supported
I have done the following steps:
sudo apt-get update -y
sudo apt-get install -y python3-tzlocal
Also, I added 'tzlocal<3.0' to setup.py.
But after all this, I am still getting the same error.
Please help!
You may try running setup again:
python3 setup.py install
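For context, a likely root cause (an inference from the traceback, not stated in the question): newer tzlocal releases return timezone objects that are not pytz instances, which this version of APScheduler rejects. Pinning tzlocal in the environment itself, not only in setup.py, before reinstalling may therefore help:
# apply the question's tzlocal<3.0 pin to the installed package, then reinstall
pip3 install 'tzlocal<3.0'
python3 setup.py install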

DCOS CLI install not working

Running DC/OS 1.8 on CentOS.
I installed the CLI as described here:
https://docs.mesosphere.com/1.8/usage/cli/install/
When I try to install Spark, I get the error below. Any ideas?
./dcos package install spark
ip-172-16-6-6.localdomain's username: admin
admin@ip-172-16-6-6.localdomain's password:
Traceback (most recent call last):
File "cli/dcoscli/subcommand.py", line 99, in run_and_capture
File "cli/dcoscli/package/main.py", line 21, in main
File "cli/dcoscli/util.py", line 21, in wrapper
File "cli/dcoscli/package/main.py", line 35, in _main
File "dcos/cmds.py", line 43, in execute
File "cli/dcoscli/package/main.py", line 356, in _install
File "dcos/cosmospackage.py", line 191, in get_package_version
File "dcos/cosmospackage.py", line 366, in __init__
File "cli/env/lib/python3.4/site-packages/requests/models.py", line 826, in json
File "json/__init__.py", line 318, in loads
File "json/decoder.py", line 343, in decode
File "json/decoder.py", line 361, in raw_decode
ValueError: Expecting value: line 1 column 1 (char 0)
I just had this issue too. For me, it was because I forgot to install virtualenv: pip install virtualenv.
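A minimal sketch of that fix, assuming pip installs into the same Python the CLI setup uses:
# install the missing dependency, then retry the failing command
pip install virtualenv
./dcos package install spark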

Server install of HDFS client fails

I am getting the following errors when installing the HDFS client through Ambari. I have reset the server several times but still cannot get it resolved. Any idea how to fix this?
stderr:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_client.py", line 120, in <module>
HdfsClient().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_client.py", line 36, in install
self.configure(env)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_client.py", line 41, in configure
hdfs()
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs.py", line 61, in hdfs
group=params.user_group
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/xml_config.py", line 67, in action_create
encoding = self.resource.encoding
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 87, in action_create
raise Fail("Applying %s failed, parent directory %s doesn't exist" % (self.resource, dirname))
resource_management.core.exceptions.Fail: Applying File['/usr/hdp/current/hadoop-client/conf/hadoop-policy.xml'] failed, parent directory /usr/hdp/current/hadoop-client/conf doesn't exist
This is a soft link that links to /etc/hadoop/conf.
I ran:
python /usr/lib/python2.6/site-packages/ambari_agent/HostCleanup.py --silent --skip=users
After running it, it removes /etc/hadoop/conf; however, reinstalling does not recreate it, so you may have to create all the conf files yourself.
Hope someone can patch it.
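A hedged sketch of recreating it by hand, based on the paths in the error above:
# recreate the target directory, then the soft link the installer expects
mkdir -p /etc/hadoop/conf
ln -s /etc/hadoop/conf /usr/hdp/current/hadoop-client/conf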
yum -y erase hdp-select
If you have done the installation multiple times, some packages might not be cleaned. To remove all HDP packages and start with a fresh installation, erase hdp-select.
If this does not help, remove all the versions from /usr/hdp; delete this directory if it contains multiple versions of HDP.
Remove all the installed packages (hadoop, hdfs, zookeeper, etc.):
yum remove zookeeper* hadoop* hdp* zookeeper*
I ran into the same problem: I was using HDP 2.3.2 on CentOS 7.
The first problem:
Some conf files point to the /etc//conf directory (as they are supposed to).
However, /etc//conf points back to the other conf directory, which leads to an endless loop.
I was able to fix this problem by removing the /etc//conf symbolic links and creating real directories.
The second problem:
If you run the Python scripts to clean up the installation and start over, several directories do not get recreated, such as the hadoop-client directory. This leads to exactly your error message. Also, the cleanup script does not work out well, as it does not clean up several users and directories; you have to userdel and groupdel them yourself.
UPDATE:
It seems this was a problem with HDP 2.3.2. In HDP 2.3.4, I did not run into that problem anymore.
Creating /usr/hdp/current/hadoop-client/conf on the failing host should solve the problem.
