AssertionError when cloning Anaconda base environment - Windows

I'm trying to create a clone of my base Anaconda environment for a specific application. I want to use the clone as the base on which to install application-specific packages. I used the following command to start the clone:
C:\Users\Liam>conda create -n retrievals --clone base
It made it a long way through the cloning process and had just reached 100% on cloning anaconda-5.2.0 when it threw the assertion error below:
# >>>>>>>>>>>>>>>>>>>>>> ERROR REPORT <<<<<<<<<<<<<<<<<<<<<<
Traceback (most recent call last):
File "C:\Users\Liam\Anaconda3\lib\site-packages\conda\exceptions.py", line 819, in __call__
return func(*args, **kwargs)
File "C:\Users\Liam\Anaconda3\lib\site-packages\conda\cli\main.py", line 78, in _main
exit_code = do_call(args, p)
File "C:\Users\Liam\Anaconda3\lib\site-packages\conda\cli\conda_argparse.py", line 77, in do_call
exit_code = getattr(module, func_name)(args, parser)
File "C:\Users\Liam\Anaconda3\lib\site-packages\conda\cli\main_create.py", line 11, in execute
install(args, parser, 'create')
File "C:\Users\Liam\Anaconda3\lib\site-packages\conda\cli\install.py", line 211, in install
clone(args.clone, prefix, json=context.json, quiet=context.quiet, index_args=index_args)
File "C:\Users\Liam\Anaconda3\lib\site-packages\conda\cli\install.py", line 72, in clone
index_args=index_args)
File "C:\Users\Liam\Anaconda3\lib\site-packages\conda\misc.py", line 277, in clone_env
force_extract=False, index_args=index_args)
File "C:\Users\Liam\Anaconda3\lib\site-packages\conda\misc.py", line 78, in explicit
assert not any(spec_pcrec[1] is None for spec_pcrec in specs_pcrecs)
AssertionError
$ C:\Users\Liam\Anaconda3\Scripts\conda create -n retrievals --clone base
Can anybody explain why this is happening and what I could try to fix it?
P.S. I'm doing this on Windows 10 if that helps at all.

I found a workaround for it. You can just copy the base environment's directory to a new name.
cp -r /opt/conda/envs/base_env /opt/conda/envs/new_env
After that you can activate or update the new environment.
conda activate new_env
conda env update --name new_env --file environment.yaml --prune
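If copying the directory isn't an option, a minimal alternative sketch that avoids --clone entirely (not from the original answer; the spec file name is illustrative) is to export the base environment's explicit package list and build the new environment from it. Note that --explicit only captures conda packages, not pip-installed ones.
conda list -n base --explicit > base-spec.txt
conda create -n retrievals --file base-spec.txt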

Related

JMeter-Taurus Docker Image Error - "No personal config found, creating one at /.bzt-rc"

I am trying to run a performance test within a Drone pipeline using JMeter-Taurus. I am pulling the latest (stable) image for blazemeter-taurus. However, when running the test in a pipeline the following error is returned.
This is the contents of the Dockerfile:
FROM blazemeter/taurus
RUN apt-get update && \
python3 -m pip install s3cmd
COPY . /bzt/
COPY config/90-artifacts-dir.json /etc/bzt.d/
COPY config/90-no-console.json /etc/bzt.d/
COPY .bzt-rc /root/.bzt-rc
RUN chown -R 1000:1000 /bzt && \
sed -i -e '/^assistive_technologies=/s/^/#/' /etc/java-*-openjdk/accessibility.properties
USER 1000
VOLUME ["/bzt"]
WORKDIR /bzt
UPDATE:
The "USER 1000" part has been removed. And the following line was added:
RUN chmod +x /bzt
But this is the error message returned:
Traceback (most recent call last):
File "/usr/local/bin/bzt", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.9/dist-packages/bzt/cli.py", line 676, in main
executor = CLI(parsed_options)
File "/usr/local/lib/python3.9/dist-packages/bzt/cli.py", line 56, in __init__
self.setup_logging(options)
File "/usr/local/lib/python3.9/dist-packages/bzt/cli.py", line 111, in setup_logging
file_handler = logging.FileHandler(options.log, encoding="utf-8")
File "/usr/lib/python3.9/logging/__init__.py", line 1146, in __init__
StreamHandler.__init__(self, self._open())
File "/usr/lib/python3.9/logging/__init__.py", line 1175, in _open
return open(self.baseFilename, self.mode, encoding=self.encoding,
PermissionError: [Errno 13] Permission denied: '/bzt/bzt.log'
######## TEST RUN STATUS : 1 #######
cat: /bzt/bzt_artifacts/bzt.log: No such file or directory
cat: /bzt/bzt_artifacts/error.jtl: No such file or directory
ls: cannot access '/bzt/bzt_artifacts/': No such file or directory
The Taurus Docker image is root-oriented; here is the Dockerfile.
I fail to see why Taurus is trying to create its config under /.bzt-rc; it may be something specific to this "Drone pipeline". My expectation is that your test will fail because your user 1000 doesn't have write permissions to the /tmp folder, where Taurus stores its artifacts.
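Given that the PermissionError points at /bzt/bzt.log, one quick check of the "root-oriented" theory is to run the built image as root outside the pipeline. A minimal sketch (the image tag and config file name are illustrative, not from the question):
docker build -t my-taurus-test .
docker run --rm --user 0 my-taurus-test test.yml
If that runs, keeping a non-root user will likely require pre-creating the log/artifact locations in the Dockerfile and chown-ing them to 1000:1000 so that user can write there.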

Seems Ansible failed to get the temp files

Ansible failed while running tasks, complaining that a file/folder does not exist in the Ansible temp folder on the target machine. It had been working fine and I'm not sure why it suddenly stopped working.
changed : false
module_stderr : Shared connection to 10.131.132.11 closed.\r\n
module_stdout : Traceback (most recent call last):\r\n File \"/root/.ansible/tmp/ansible-tmp-1662460628.845051-105063944436404/AnsiballZ_setup.py\", line 113, in <module>\r\n try:\r\n File \"/root/.ansible/tmp/ansible-tmp-1662460628.845051-105063944436404/AnsiballZ_setup.py\", line 98, in _ansiballz_main\r\n json_params = f.read()\r\n File \"/usr/lib64/python2.7/tempfile.py\", line 321, in mkdtemp\r\n dir = gettempdir()\r\n File \"/usr/lib64/python2.7/tempfile.py\", line 265, in gettempdir\r\n tempdir = _get_default_tempdir()\r\n File \"/usr/lib64/python2.7/tempfile.py\", line 212, in _get_default_tempdir\r\n (\"No usable temporary directory found in %s\" % dirlist))\r\nIOError: [Errno 2] No usable temporary directory found in ['/tmp', '/var/tmp', '/usr/tmp', '/root']\r\n
msg : MODULE FAILURE\nSee stdout/stderr for the exact error
rc : 1
You can start by checking whether the permissions changed.
I think the disk is probably full; I had the same issue and that was the cause. You can check with df -h.
Also verify the inodes with df -i.
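A quick diagnostic sketch against the target host (the address comes from the module_stderr above; adapt the login to however you normally reach that machine):
ssh root@10.131.132.11 'df -h /tmp /var/tmp /root; df -i /tmp; ls -ld /tmp /var/tmp'
The IOError means none of /tmp, /var/tmp, /usr/tmp or /root was usable as a temp dir, which is consistent with a full disk, exhausted inodes, or changed permissions.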

`ansible --version` command throws an ERROR

Good day, guys.
I have installed Ansible on my Mac. It was installed successfully, but when I run the command ansible --version it throws an error:
Unhandled error:
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/ansible/config/manager.py", line 559, in update_config_data
value, origin = self.get_config_value_and_origin(config, configfile)
File "/Library/Python/2.7/site-packages/ansible/config/manager.py", line 503, in get_config_value_and_origin
value = ensure_type(value, defs[config].get('type'), origin=origin)
File "/Library/Python/2.7/site-packages/ansible/config/manager.py", line 124, in ensure_type
value = tempfile.mkdtemp(prefix=prefix, dir=value)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/tempfile.py", line 339, in mkdtemp
_os.mkdir(file, 0700)
OSError: [Errno 13] Permission denied: '/Users/patrick/.ansible/tmp/ansible-local-37505vvsQNX'
Traceback (most recent call last):
File "/usr/local/bin/ansible", line 62, in <module>
import ansible.constants as C
File "/Library/Python/2.7/site-packages/ansible/constants.py", line 174, in <module>
config = ConfigManager()
File "/Library/Python/2.7/site-packages/ansible/config/manager.py", line 291, in __init__
self.update_config_data()
File "/Library/Python/2.7/site-packages/ansible/config/manager.py", line 571, in update_config_data
raise AnsibleError("Invalid settings supplied for %s: %s\n" % (config, to_native(e)), orig_exc=e)
ansible.errors.AnsibleError: Invalid settings supplied for DEFAULT_LOCAL_TMP: [Errno 13] Permission denied: '/Users/patrick/.ansible/tmp/ansible-local-37505vvsQNX'
Appreciate your help guys.
Check if this is similar to iiab/iiab issue 1212:
After some more digging, I found the real source of this problem was hardcoded paths in ansible.cfg
Or the more recent ansible/galaxy-dev issue 107, which leads to PR 110:
The default ansible temp dir ~/.ansible/tmp is accessed by ansible-doc via the galaxy-importer.
This works OK in the galaxy-dev local env, but on the CI environment it attempts to create dir /.ansible/tmp and fails.
This PR changes the default ansible temp dir to /tmp/ansible which is outside the user home (and in the local env, /tmp has greater permissions)
Try changing DEFAULT_LOCAL_TMP to /tmp.
~/.ansible/tmp is probably owned by root, so it is not accessible; chown it to your user.
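A minimal sketch of those two suggestions (the temp path comes from the traceback; the ini key for DEFAULT_LOCAL_TMP is local_tmp under [defaults]):
sudo chown -R "$(whoami)" ~/.ansible
# or, instead, point the local temp dir somewhere writable in ansible.cfg:
#   [defaults]
#   local_tmp = /tmp/ansible-local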
I fixed the issue by commenting out the 'gathering' setting in ansible.cfg:
# gathering = False
The error below was occurring before I commented it out.
~ % ansible
Unhandled error:
Traceback (most recent call last):
File "/opt/homebrew/Cellar/ansible/6.6.0/libexec/lib/python3.10/site-packages/ansible/config/manager.py", line 627, in update_config_data
value, origin = self.get_config_value_and_origin(config, configfile)
File "/opt/homebrew/Cellar/ansible/6.6.0/libexec/lib/python3.10/site-packages/ansible/config/manager.py", line 586, in get_config_value_and_origin
raise AnsibleOptionsError('Invalid value "%s" for configuration option "%s", valid values are: %s' %
ansible.errors.AnsibleOptionsError: Invalid value "False" for configuration option "setting: DEFAULT_GATHERING ", valid values are: implicit, explicit, smart
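For reference, the error itself lists the accepted values. If you would rather set the option than comment it out, a sketch of a valid entry in ansible.cfg (choosing explicit here is purely illustrative):
[defaults]
gathering = explicit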

Golang: Preview of managed VM app returns error

I'm trying to preview a Go Docker (App Engine Managed VM) app using the gcloud preview app run command.
But I keep getting this error:
Traceback (most recent call last):
File "/Users/jwesonga/google-cloud-sdk/platform/google_appengine/dev_appserver.py", line 83, in <module>
_run_file(__file__, globals())
File "/Users/jwesonga/google-cloud-sdk/platform/google_appengine/dev_appserver.py", line 79, in _run_file
execfile(_PATHS.script_file(script_name), globals_)
File "/Users/jwesonga/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/devappserver2.py", line 985, in <module>
main()
File "/Users/jwesonga/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/devappserver2.py", line 978, in main
dev_server.start(options)
File "/Users/jwesonga/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/devappserver2.py", line 774, in start
self._dispatcher.start(options.api_host, apis.port, request_data)
File "/Users/jwesonga/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/dispatcher.py", line 182, in start
_module, port = self._create_module(module_configuration, port)
File "/Users/jwesonga/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/dispatcher.py", line 262, in _create_module
threadsafe_override=threadsafe_override)
File "/Users/jwesonga/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/module.py", line 1463, in __init__
super(ManualScalingModule, self).__init__(**kwargs)
File "/Users/jwesonga/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/module.py", line 514, in __init__
self._module_configuration)
File "/Users/jwesonga/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/module.py", line 237, in _create_instance_factory
module_configuration=module_configuration)
File "/Users/jwesonga/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/vm_runtime_factory.py", line 78, in __init__
timeout=self.DOCKER_D_REQUEST_TIMEOUT_SECS)
File "/Users/jwesonga/google-cloud-sdk/platform/google_appengine/google/appengine/tools/docker/containers.py", line 740, in NewDockerClient
client.ping()
File "/Users/jwesonga/google-cloud-sdk/./lib/docker/docker/client.py", line 711, in ping
return self._result(self._get(self._url('/_ping')))
File "/Users/jwesonga/google-cloud-sdk/./lib/docker/docker/client.py", line 76, in _get
return self.get(url, **self._set_request_timeout(kwargs))
File "/Users/jwesonga/google-cloud-sdk/platform/google_appengine/lib/requests/requests/sessions.py", line 468, in get
return self.request('GET', url, **kwargs)
File "/Users/jwesonga/google-cloud-sdk/platform/google_appengine/lib/requests/requests/sessions.py", line 456, in request
resp = self.send(prep, **send_kwargs)
File "/Users/jwesonga/google-cloud-sdk/platform/google_appengine/lib/requests/requests/sessions.py", line 559, in send
r = adapter.send(request, **kwargs)
File "/Users/jwesonga/google-cloud-sdk/platform/google_appengine/lib/requests/requests/adapters.py", line 384, in send
raise Timeout(e, request=request)
requests.exceptions.Timeout: (<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x10631c7d0>, 'Connection to 192.168.59.104 timed out. (connect timeout=60)')
ERROR: (gcloud.preview.app.run) DevAppSever failed with error code [1]
I've confirmed that Docker is up and running using boot2docker status, which returns running. This was working before, but after a machine reboot nothing seems to work. Any ideas?
The main issue is:
File "/Users/jwesonga/google-cloud-sdk/platform/google_appengine
/lib/requests/requests/adapters.py", line 384, in send
raise Timeout(e, request=request)
requests.exceptions.Timeout:
(<requests.packages.urllib3.connection.VerifiedHTTPSConnection object
at 0x10631c7d0>, 'Connection to 192.168.59.104 timed out.
(connect timeout=60)')
ERROR: (gcloud.preview.app.run) DevAppSever failed with error code [1]
This is often the case when you have a proxy, and is discussed in pip issue 1805.
It is supposed to be fixed in pip 1.6, but just in case, you can try the workaround from alexandrem:
/opt/venvs/ironic/lib/python2.6/site-packages/pip/_vendor/requests/adapters.patch.py /opt/venvs/ironic/lib/python2.6/site-packages/pip/_vendor/requests/adapters.py
209c209
if True or not proxy in self.proxy_manager:
   ^^^^
Basically I just add a True to the condition on line 209 of adapters.py to always create a ProxyManager instance, thus skipping the pool manager logic.
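If you want to apply that one-line tweak non-interactively, a sketch with sed (the path is the one shown in the diff above and the unpatched line is inferred from it; make a backup first):
sed -i.bak 's/if not proxy in self.proxy_manager:/if True or not proxy in self.proxy_manager:/' \
  /opt/venvs/ironic/lib/python2.6/site-packages/pip/_vendor/requests/adapters.py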
The gcloud command enables the ah_host process, creates the Docker image of your app, and passes it to the Docker daemon; in your case it seems that your Docker daemon is not responding to the request. To make sure, run sudo docker -d to check whether the Docker daemon is running on your machine.
Also check that the certificate path is set correctly and that the value of TLS_VERIFY is TRUE.
Go through the documentation [1] for the installation of Docker on MacOS
[1] https://docs.docker.com/installation/mac/
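Since the timeout is to the boot2docker VM's address (192.168.59.104), a quick diagnostic sketch for that era of Docker-on-Mac setups (commands are boot2docker's own; the values printed will vary):
boot2docker status
boot2docker ip
eval "$(boot2docker shellinit)"   # exports DOCKER_HOST, DOCKER_CERT_PATH, DOCKER_TLS_VERIFY
docker version                    # should now reach the daemon inside the VM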

Windows hg authorisation failed

I have an hg repository on Windows, but the following commands:
hg pull
hg push
hg incoming
hg outgoing
all have the result:
abort: authorization failed
When I try to access my repository from a web browser, it asks for my credentials. I enter them and can access the repository without problems.
In my mercurial.ini file, I've added
[auth]
bb.username = MyUserName
bb.password = MyPwd
and I've checked that the environment variable HGRCPATH is correct, but it didn't solve anything.
The output of hg incoming --debug --traceback is :
using http://My/Repo/url.com
sending capabilities command
Traceback (most recent call last):
File "mercurial\dispatch.pyc", line 97, in _runcatch
File "mercurial\dispatch.pyc", line 778, in _dispatch
File "mercurial\dispatch.pyc", line 549, in runcommand
File "mercurial\dispatch.pyc", line 869, in _runcommand
File "mercurial\dispatch.pyc", line 840, in checkargs
File "mercurial\dispatch.pyc", line 775, in <lambda>
File "mercurial\util.pyc", line 512, in check
File "mercurial\extensions.pyc", line 143, in wrap
File "mercurial\util.pyc", line 512, in check
File "hgext\mq.pyc", line 3528, in mqcommand
File "mercurial\util.pyc", line 512, in check
File "mercurial\commands.pyc", line 3854, in incoming
File "mercurial\hg.pyc", line 548, in incoming
File "mercurial\hg.pyc", line 500, in _incoming
File "mercurial\hg.pyc", line 122, in peer
File "mercurial\hg.pyc", line 102, in _peerorrepo
File "mercurial\httppeer.pyc", line 264, in instance
File "mercurial\httppeer.pyc", line 57, in _fetchcaps
File "mercurial\httppeer.pyc", line 197, in _call
File "hgext\largefiles\proto.pyc", line 174, in httprepocallstream
File "mercurial\httppeer.pyc", line 121, in _callstream
Abort: authorization failed
abort: authorization failed
If that's all you have in the [auth] section of your mercurial.ini, you're missing the required .prefix entry. It tells Mercurial which sites to use that username and password on. See http://www.selenic.com/mercurial/hgrc.5.html#auth for details on how to use the prefix.
Also make sure you see an http/s URL when you do hg paths. If you're seeing ssh URLs then you need to set up a key, not a password (or switch to the http/s URLs).
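A minimal sketch of a complete [auth] block (the prefix here just mirrors the repo URL from the debug output above; adjust it to your real host):
[auth]
bb.prefix = http://My/Repo/url.com
bb.username = MyUserName
bb.password = MyPwd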
