DCOS CLI install not working - mesos

Running DC/OS 1.8 on CentOS.
I installed the CLI as described here:
https://docs.mesosphere.com/1.8/usage/cli/install/
When I try to install Spark I get the error below. Any ideas?
./dcos package install spark
ip-172-16-6-6.localdomain's username: admin
admin#ip-172-16-6-6.localdomain's password:
Traceback (most recent call last):
File "cli/dcoscli/subcommand.py", line 99, in run_and_capture
File "cli/dcoscli/package/main.py", line 21, in main
File "cli/dcoscli/util.py", line 21, in wrapper
File "cli/dcoscli/package/main.py", line 35, in _main
File "dcos/cmds.py", line 43, in execute
File "cli/dcoscli/package/main.py", line 356, in _install
File "dcos/cosmospackage.py", line 191, in get_package_version
File "dcos/cosmospackage.py", line 366, in __init__
File "cli/env/lib/python3.4/site-packages/requests/models.py", line 826, in json
File "json/__init__.py", line 318, in loads
File "json/decoder.py", line 343, in decode
File "json/decoder.py", line 361, in raw_decode
ValueError: Expecting value: line 1 column 1 (char 0)

I just had this issue too. For me it was because I forgot to install virtualenv: pip install virtualenv fixed it.
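As background, the final ValueError is just json failing to parse a response body that isn't JSON at all, for example an empty body or an HTML login/error page returned where the CLI expected JSON. A minimal stdlib reproduction, unrelated to DC/OS itself:

```python
# json raises this exact ValueError whenever the text it is given is not
# JSON at all -- an empty body or an HTML page both fail at
# "line 1 column 1 (char 0)".
import json

try:
    json.loads("")  # stands in for a non-JSON HTTP response body
except ValueError as exc:
    print(exc)  # Expecting value: line 1 column 1 (char 0)
```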

'azsphere tenant list' command error occurred

To check the Azure Sphere tenant list, I ran the azsphere tenant list command, but it failed with the error below.
>azsphere tenant list --verbose
The command failed with an unexpected error. Here is the traceback:
No section: 'sphere'
Traceback (most recent call last):
File "D:\a\_work\1\s\azsphere-cli\build_scripts\windows\artifacts\cli\Lib\site-packages\knack/cli.py", line 231, in invoke
File "D:\a\_work\1\s\azsphere-cli\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 657, in execute
File "D:\a\_work\1\s\azsphere-cli\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 720, in _run_jobs_serially
File "D:\a\_work\1\s\azsphere-cli\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 712, in _run_job
File "D:\a\_work\1\s\azsphere-cli\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/sphere/cli/core/cloud_exception_handler.py", line 189, in cloud_exception_handler
File "D:\a\_work\1\s\azsphere-cli\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 691, in _run_job
File "D:\a\_work\1\s\azsphere-cli\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 328, in __call__
File "D:\a\_work\1\s\azsphere-cli\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/command_operation.py", line 112, in handler
File "D:\a\_work\1\s\azsphere-cli\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/sphere/cli/core/client_factory.py", line 122, in cf_tenants
File "D:\a\_work\1\s\azsphere-cli\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/sphere/cli/core/client_factory.py", line 35, in cf_public_api
File "D:\a\_work\1\s\azsphere-cli\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/sphere/cli/core/profile.py", line 220, in get_login_credentials
File "D:\a\_work\1\s\azsphere-cli\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/sphere/cli/core/config.py", line 144, in get_azsphere_config_value
File "D:\a\_work\1\s\azsphere-cli\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/sphere/cli/core/config.py", line 114, in get_global_config_value
File "D:\a\_work\1\s\azsphere-cli\build_scripts\windows\artifacts\cli\Lib\site-packages\knack/config.py", line 97, in get
File "D:\a\_work\1\s\azsphere-cli\build_scripts\windows\artifacts\cli\Lib\site-packages\knack/config.py", line 92, in get
File "D:\a\_work\1\s\azsphere-cli\build_scripts\windows\artifacts\cli\Lib\site-packages\knack/config.py", line 206, in get
File "configparser.py", line 781, in get
File "configparser.py", line 1149, in _unify_values
configparser.NoSectionError: No section: 'sphere'
To open an issue, please run: 'az feedback'
Command ran in 0.159 seconds (init: 0.008, invoke: 0.151)
I have 3 Azure Sphere tenants.
When I use "Azure Sphere Classic Developer Command Prompt (Deprecated)", it works.
I have been using this environment for some time, and the problem seems to have started after I installed the latest version of the Azure Sphere SDK.
Below is the current SDK version.
>azsphere show-version
----------------
Azure Sphere SDK
================
22.02.3.41775
----------------
How can I fix this?
This is a known issue with the 22.02 SDK and will be fixed in the next release. For now, if you log in to the CLI (azsphere login), the issue will be resolved.
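For background, the traceback bottoms out in the standard library's configparser: the CLI's config file simply has no [sphere] section until a login has created it. A minimal reproduction of the same error with configparser alone (the section and option names here are illustrative, not the CLI's real ones):

```python
# Looking up a section that the config file does not contain raises
# NoSectionError -- the exact failure at the bottom of the traceback.
import configparser

cfg = configparser.ConfigParser()
cfg.read_string("[other]\nkey = value\n")  # no [sphere] section

try:
    cfg.get("sphere", "tenant")
except configparser.NoSectionError as exc:
    print(exc)  # No section: 'sphere'
```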

installing requirements-dev.txt from Qiskit raises an error with SQNomad

I am trying to install the following requirements with pip:
coverage>=4.4.0
hypothesis>=4.24.3
ipywidgets>=7.3.0
jupyter
matplotlib>=2.1
pillow>=4.2.1
pycodestyle
pydot
astroid==2.5
pylint==2.7.1
stestr>=2.0.0
PyGithub
wheel
cython>=0.27.1
pylatexenc>=1.4
ddt>=1.2.0,!=1.4.0
seaborn>=0.9.0
reno>=3.2.0
Sphinx>=1.8.3,<3.1.0
qiskit-sphinx-theme>=1.6
sphinx-autodoc-typehints
jupyter-sphinx
sphinx-panels
pygments>=2.4
tweedledum==0.1b0
networkx>=2.2
scikit-learn>=0.20.0
scikit-quant;platform_system != 'Windows'
jax;platform_system != 'Windows'
jaxlib;platform_system != 'Windows'
When pip reaches SQNomad I get the following error:
Downloading SQNomad-0.1.0.tar.gz (385 kB)
|████████████████████████████████| 385 kB 5.6 MB/s
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing wheel metadata ... error
ERROR: Command errored out with exit status 1:
command: /Users/jim-felixlobsien/opt/anaconda3/bin/python /Users/jim-felixlobsien/opt/anaconda3/lib/python3.8/site-packages/pip/_vendor/pep517/_in_process.py prepare_metadata_for_build_wheel /var/folders/h4/1z5lkc1x4nb2s31y0f6y9r200000gn/T/tmpta1cj_q6
cwd: /private/var/folders/h4/1z5lkc1x4nb2s31y0f6y9r200000gn/T/pip-install-pw0tuweh/SQNomad
Complete output (55 lines):
running dist_info
creating /private/var/folders/h4/1z5lkc1x4nb2s31y0f6y9r200000gn/T/pip-modern-metadata-j_5tehl8/SQNomad.egg-info
writing /private/var/folders/h4/1z5lkc1x4nb2s31y0f6y9r200000gn/T/pip-modern-metadata-j_5tehl8/SQNomad.egg-info/PKG-INFO
writing dependency_links to /private/var/folders/h4/1z5lkc1x4nb2s31y0f6y9r200000gn/T/pip-modern-metadata-j_5tehl8/SQNomad.egg-info/dependency_links.txt
writing requirements to /private/var/folders/h4/1z5lkc1x4nb2s31y0f6y9r200000gn/T/pip-modern-metadata-j_5tehl8/SQNomad.egg-info/requires.txt
writing top-level names to /private/var/folders/h4/1z5lkc1x4nb2s31y0f6y9r200000gn/T/pip-modern-metadata-j_5tehl8/SQNomad.egg-info/top_level.txt
writing manifest file '/private/var/folders/h4/1z5lkc1x4nb2s31y0f6y9r200000gn/T/pip-modern-metadata-j_5tehl8/SQNomad.egg-info/SOURCES.txt'
Traceback (most recent call last):
File "/Users/jim-felixlobsien/opt/anaconda3/lib/python3.8/site-packages/pip/_vendor/pep517/_in_process.py", line 280, in <module>
main()
File "/Users/jim-felixlobsien/opt/anaconda3/lib/python3.8/site-packages/pip/_vendor/pep517/_in_process.py", line 263, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/Users/jim-felixlobsien/opt/anaconda3/lib/python3.8/site-packages/pip/_vendor/pep517/_in_process.py", line 133, in prepare_metadata_for_build_wheel
return hook(metadata_directory, config_settings)
File "/private/var/folders/h4/1z5lkc1x4nb2s31y0f6y9r200000gn/T/pip-build-env-8qqwe_v9/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 161, in prepare_metadata_for_build_wheel
self.run_setup()
File "/private/var/folders/h4/1z5lkc1x4nb2s31y0f6y9r200000gn/T/pip-build-env-8qqwe_v9/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 253, in run_setup
super(_BuildMetaLegacyBackend,
File "/private/var/folders/h4/1z5lkc1x4nb2s31y0f6y9r200000gn/T/pip-build-env-8qqwe_v9/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 145, in run_setup
exec(compile(code, __file__, 'exec'), locals())
File "setup.py", line 77, in <module>
setup(
File "/private/var/folders/h4/1z5lkc1x4nb2s31y0f6y9r200000gn/T/pip-build-env-8qqwe_v9/overlay/lib/python3.8/site-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/Users/jim-felixlobsien/opt/anaconda3/lib/python3.8/distutils/core.py", line 148, in setup
dist.run_commands()
File "/Users/jim-felixlobsien/opt/anaconda3/lib/python3.8/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/Users/jim-felixlobsien/opt/anaconda3/lib/python3.8/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/private/var/folders/h4/1z5lkc1x4nb2s31y0f6y9r200000gn/T/pip-build-env-8qqwe_v9/overlay/lib/python3.8/site-packages/setuptools/command/dist_info.py", line 31, in run
egg_info.run()
File "/private/var/folders/h4/1z5lkc1x4nb2s31y0f6y9r200000gn/T/pip-build-env-8qqwe_v9/overlay/lib/python3.8/site-packages/setuptools/command/egg_info.py", line 299, in run
self.find_sources()
File "/private/var/folders/h4/1z5lkc1x4nb2s31y0f6y9r200000gn/T/pip-build-env-8qqwe_v9/overlay/lib/python3.8/site-packages/setuptools/command/egg_info.py", line 306, in find_sources
mm.run()
File "/private/var/folders/h4/1z5lkc1x4nb2s31y0f6y9r200000gn/T/pip-build-env-8qqwe_v9/overlay/lib/python3.8/site-packages/setuptools/command/egg_info.py", line 541, in run
self.add_defaults()
File "/private/var/folders/h4/1z5lkc1x4nb2s31y0f6y9r200000gn/T/pip-build-env-8qqwe_v9/overlay/lib/python3.8/site-packages/setuptools/command/egg_info.py", line 577, in add_defaults
sdist.add_defaults(self)
File "/Users/jim-felixlobsien/opt/anaconda3/lib/python3.8/distutils/command/sdist.py", line 228, in add_defaults
self._add_defaults_ext()
File "/Users/jim-felixlobsien/opt/anaconda3/lib/python3.8/distutils/command/sdist.py", line 311, in _add_defaults_ext
build_ext = self.get_finalized_command('build_ext')
File "/Users/jim-felixlobsien/opt/anaconda3/lib/python3.8/distutils/cmd.py", line 299, in get_finalized_command
cmd_obj.ensure_finalized()
File "/Users/jim-felixlobsien/opt/anaconda3/lib/python3.8/distutils/cmd.py", line 107, in ensure_finalized
self.finalize_options()
File "/private/var/folders/h4/1z5lkc1x4nb2s31y0f6y9r200000gn/T/pip-build-env-8qqwe_v9/overlay/lib/python3.8/site-packages/numpy/distutils/command/build_ext.py", line 86, in finalize_options
self.set_undefined_options('build',
File "/Users/jim-felixlobsien/opt/anaconda3/lib/python3.8/distutils/cmd.py", line 290, in set_undefined_options
setattr(self, dst_option, getattr(src_cmd_obj, src_option))
File "/Users/jim-felixlobsien/opt/anaconda3/lib/python3.8/distutils/cmd.py", line 103, in __getattr__
raise AttributeError(attr)
AttributeError: cpu_baseline
----------------------------------------
ERROR: Command errored out with exit status 1: /Users/jim-felixlobsien/opt/anaconda3/bin/python /Users/jim-felixlobsien/opt/anaconda3/lib/python3.8/site-packages/pip/_vendor/pep517/_in_process.py prepare_metadata_for_build_wheel /var/folders/h4/1z5lkc1x4nb2s31y0f6y9r200000gn/T/tmpta1cj_q6 Check the logs for full command output.
Does anybody know what is happening? I am trying to follow the instructions in the following video:
https://www.youtube.com/watch?v=QjZdvNgYl3s&t=731s
The task is to contribute to qiskit!
Thank you very much in advance
The problem comes from a dependency in scikit-quant. The most direct fix is to install the previous (0.7) version of scikit-quant instead of the current 0.8, as Qiskit appears to be compatible with it:
pip install 'scikit-quant==0.7'
Fixed with SQNomad 0.2.1; thanks to luciano for reporting.
It has now also been made an optional component of scikit-quant, installable with scikit-quant[NOMAD] if desired. NOMAD is quite a large C++ library and needs to be split into its Python-dependent and Python-independent parts to avoid producing massive wheels; once that is done, installation will be easier and its optional status can be reverted.
All optimizers can in any case be installed and used independently (e.g. with python -m pip install SQNomad for NOMAD).

Unable to start ElastAlert: Only timezones from the pytz library are supported

I am unable to test a rule in ElastAlert. I am running the following command in the terminal:
elastalert-test-rule --config config.yaml example_rules/example_frequency.yaml
File "/usr/local/bin/elastalert-test-rule", line 11, in <module>
load_entry_point('elastalert==0.2.4', 'console_scripts', 'elastalert-test-rule')()
File "/usr/local/lib/python3.6/dist-packages/elastalert-0.2.4-py3.6.egg/elastalert/test_rule.py", line 445, in main
test_instance.run_rule_test()
File "/usr/local/lib/python3.6/dist-packages/elastalert-0.2.4-py3.6.egg/elastalert/test_rule.py", line 437, in run_rule_test
self.run_elastalert(rule_yaml, conf, args)
File "/usr/local/lib/python3.6/dist-packages/elastalert-0.2.4-py3.6.egg/elastalert/test_rule.py", line 307, in run_elastalert
client = ElastAlerter(['--debug'])
File "/usr/local/lib/python3.6/dist-packages/elastalert-0.2.4-py3.6.egg/elastalert/elastalert.py", line 173, in __init__
if not self.init_rule(rule):
File "/usr/local/lib/python3.6/dist-packages/elastalert-0.2.4-py3.6.egg/elastalert/elastalert.py", line 1038, in init_rule
jitter=5)
File "/usr/local/lib/python3.6/dist-packages/APScheduler-3.6.3-py3.6.egg/apscheduler/schedulers/base.py", line 420, in add_job
'trigger': self._create_trigger(trigger, trigger_args),
File "/usr/local/lib/python3.6/dist-packages/APScheduler-3.6.3-py3.6.egg/apscheduler/schedulers/base.py", line 921, in _create_trigger
return self._create_plugin_instance('trigger', trigger, trigger_args)
File "/usr/local/lib/python3.6/dist-packages/APScheduler-3.6.3-py3.6.egg/apscheduler/schedulers/base.py", line 906, in _create_plugin_instance
return plugin_cls(**constructor_kwargs)
File "/usr/local/lib/python3.6/dist-packages/APScheduler-3.6.3-py3.6.egg/apscheduler/triggers/interval.py", line 38, in __init__
self.timezone = astimezone(timezone)
File "/usr/local/lib/python3.6/dist-packages/APScheduler-3.6.3-py3.6.egg/apscheduler/util.py", line 93, in astimezone
raise TypeError('Only timezones from the pytz library are supported')
TypeError: Only timezones from the pytz library are supported
I have done the following steps:
sudo apt-get update -y
sudo apt-get install -y python3-tzlocal
Also, I added 'tzlocal<3.0' to setup.py.
But even after all this I am getting the same error.
Please help!
You may try running setup again:
python3 setup.py install
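For background, the TypeError comes from APScheduler's timezone check: it only accepts pytz timezone objects, while tzlocal 3.x returns zoneinfo-based ones, which is why pinning tzlocal<3.0 matters and why the install has to be re-run for the pin to take effect. A simplified sketch (an assumption, not APScheduler's exact code) of the check in apscheduler.util.astimezone:

```python
# pytz timezones expose .localize()/.normalize(); tzinfo objects from
# tzlocal >= 3 (zoneinfo-based) and the stdlib do not, so APScheduler
# rejects them. Simplified sketch of the check:
from datetime import timezone, tzinfo


def astimezone(tz):
    if isinstance(tz, tzinfo) and not hasattr(tz, "localize"):
        raise TypeError("Only timezones from the pytz library are supported")
    return tz


try:
    astimezone(timezone.utc)  # stands in for any non-pytz timezone
except TypeError as exc:
    print(exc)  # Only timezones from the pytz library are supported
```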

Installing `transformers` on HPC Cluster

I'm trying to install the transformers library on HPC. I do:
git clone https://github.com/huggingface/transformers.git
cd transformers
pip install -e . --user
All three of these work as expected, with the last output being:
Successfully installed dataclasses-0.7 numpy-1.19.0 tokenizers-0.8.1rc2 transformers
Then, I try python -c "import transformers" but I get the following error:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/maths/btech/mt1170727/transformers/src/transformers/__init__.py", line 23, in <module>
from .configuration_albert import ALBERT_PRETRAINED_CONFIG_ARCHIVE_MAP, AlbertConfig
File "/home/maths/btech/mt1170727/transformers/src/transformers/configuration_albert.py", line 18, in <module>
from .configuration_utils import PretrainedConfig
File "/home/maths/btech/mt1170727/transformers/src/transformers/configuration_utils.py", line 25, in <module>
from .file_utils import CONFIG_NAME, cached_path, hf_bucket_url, is_remote_url
File "/home/maths/btech/mt1170727/transformers/src/transformers/file_utils.py", line 37, in <module>
import torch
File "/home/soft/PYTHON/3.6.0/ucs4/gnu/447/lib/python3.6/site-packages/torch/__init__.py", line 125, in <module>
_load_global_deps()
File "/home/soft/PYTHON/3.6.0/ucs4/gnu/447/lib/python3.6/site-packages/torch/__init__.py", line 83, in _load_global_deps
ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)
File "/home/soft/PYTHON/3.6.0/ucs4/gnu/447/lib/python3.6/ctypes/__init__.py", line 344, in __init__
self._handle = _dlopen(self._name, mode)
OSError: libnvToolsExt.so.1: cannot open shared object file: No such file or directory
I have done as the documentation describes and can't see why I'm facing this error. Any help would be great. Thanks.
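Note that the OSError at the bottom comes from the dynamic loader rather than from transformers: torch's import dlopen()s libnvToolsExt, a CUDA library, and the loader cannot find it. On an HPC cluster that usually suggests a CUDA environment module has not been loaded (the exact module name depends on your site, so that is an assumption). The loader error itself can be reproduced with ctypes on any machine where that library is absent:

```python
# ctypes.CDLL performs the same dlopen() that torch's import does, so the
# same OSError appears wherever the CUDA library is not on the loader's
# search path (LD_LIBRARY_PATH / ldconfig cache).
import ctypes

try:
    ctypes.CDLL("libnvToolsExt.so.1")
    print("library found; CUDA runtime libraries are on the loader path")
except OSError as exc:
    print(exc)
```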

docker-compose and iTerm2 on Mac

I am on a Mac, El Capitan 10.11.5.
Until today, I was able to start the Docker daemon via the Docker Quickstart Terminal,
then go into my project folder and run docker-compose up.
Now, when I run that, I keep getting:
docker-compose --verbose up --timeout 120
compose.config.config.find: Using configuration files: ./docker-compose.yml
docker.auth.auth.load_config: Found 'auths' section
docker.auth.auth.parse_auth: Found entry (registry=u'https://index.docker.io/v1/', username=u'my_user')
Traceback (most recent call last):
File "<string>", line 3, in <module>
File "compose/cli/main.py", line 58, in main
File "compose/cli/main.py", line 106, in perform_command
File "compose/cli/command.py", line 34, in project_from_options
File "compose/cli/command.py", line 79, in get_project
File "compose/cli/command.py", line 55, in get_client
File "site-packages/docker/api/daemon.py", line 76, in version
File "site-packages/docker/utils/decorators.py", line 47, in inner
File "site-packages/docker/client.py", line 120, in _get
File "site-packages/requests/sessions.py", line 477, in get
File "site-packages/requests/sessions.py", line 465, in request
File "site-packages/requests/sessions.py", line 573, in send
File "site-packages/requests/adapters.py", line 415, in send
requests.exceptions.ConnectionError: ('Connection aborted.', error(2, 'No such file or directory'))
Is there a quick solution for this problem? My versions are:
docker-machine version 0.7.0, build a650a40
Docker version 1.11.1, build 5604cbe
docker-compose version 1.7.1, build 0a9ab35
iTerm2 Build 3.0.0
virtual machine version 5.0.20 r106931
I solved this problem by adding the following line to my .profile file:
alias docker_compose_run='eval "$(docker-machine env default)" && docker-compose up'
then I simply run docker_compose_run
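Expanded, the alias just regenerates the environment variables that point the Docker client at the docker-machine VM before composing. Something like this, assuming the default machine name of "default":

```shell
# Boot the VM if it is not already running (a no-op otherwise)
docker-machine start default
# Export DOCKER_HOST and TLS settings into this shell session
eval "$(docker-machine env default)"
# docker-compose now talks to the daemon inside the VM instead of
# looking for a local socket that does not exist
docker-compose up
```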
