Getting an error while using Mirror in Hard Ops. I am using a Mac, and it works fine on Windows. Can it not be used on a Mac?
Traceback (most recent call last):
File "///Library/Application Support/Blender/2.82/scripts/addons/HOps/operators/Gizmos/mirror.py", line 347, in invoke
current_tool = ToolSelectPanelHelper._tool_get_active(context, 'VIEW_3D', None)[0][0]
File "/Applications/Blender.app/Contents/Resources/2.82/scripts/startup/bl_ui/space_toolsystem_common.py", line 250, in _tool_get_active
for item in ToolSelectPanelHelper._tools_flatten(cls.tools_from_context(context, mode)):
AttributeError: type object 'ToolSelectPanelHelper' has no attribute 'tools_from_context'
location: :-1
Maybe this helps: I had this error on Windows with the previous Hard Ops release (Curium), and it was solved by the latest Hard Ops update, Neodymium.
Link to Hard Ops on BlenderMarket
I'm on macOS Big Sur trying to run rfcat. I am running Anaconda as well, and I have set up an environment with Python 2.7 after I originally got errors with Python 3.x. I have downloaded the pyusb, pyreadline, ipython, PySide2, and libusb dependencies. libusb seems to be giving me the most trouble. I keep getting the following error:
Error in resetup():NoBackendError('No backend available',)
Error in resetup():NoBackendError('No backend available',)
Error in resetup():NoBackendError('No backend available',)
^CTraceback (most recent call last):
File "/opt/anaconda3/envs/rftools/bin/rfcat", line 4, in <module>
__import__('pkg_resources').run_script('rfcat==1.9.5', 'rfcat')
File "/opt/anaconda3/envs/rftools/lib/python2.7/site-packages/pkg_resources/__init__.py", line 666, in run_script
self.require(requires)[0].run_script(script_name, ns)
File "/opt/anaconda3/envs/rftools/lib/python2.7/site-packages/pkg_resources/__init__.py", line 1469, in run_script
exec(script_code, namespace, namespace)
File "/opt/anaconda3/envs/rftools/lib/python2.7/site-packages/rfcat-1.9.5-py2.7.egg/EGG-INFO/scripts/rfcat", line 63, in <module>
File "build/bdist.macosx-10.7-x86_64/egg/rflib/__init__.py", line 208, in interactive
File "build/bdist.macosx-10.7-x86_64/egg/rflib/chipcon_nic.py", line 103, in __init__
File "build/bdist.macosx-10.7-x86_64/egg/rflib/chipcon_usb.py", line 93, in __init__
File "build/bdist.macosx-10.7-x86_64/egg/rflib/chipcon_usb.py", line 238, in resetup
KeyboardInterrupt
From my research so far, "backend" is how pyusb refers to libusb, libusb1, or openusb. It is unable to find libusb within the environment. I did a little digging and found that, ultimately, the find_library() function lives in ctypes, in util.py. For macOS it refers to the executable path with #executable_path/../lib/libusb%s..... I tried putting libusb into a folder on my executable path, hoping to match this function's search, and still got the same errors. I then found instructions on specifying a custom backend path for pyusb here. This appears to be a method where you supply the device and backend information at the beginning of your program. The code I inserted is as follows:
import usb.core
import usb.backend.libusb1 as libusb1
backend = libusb1.get_backend(find_library=lambda x: "/path/to/file/lib/libusb-1.0.0.dylib")
dev = usb.core.find(idVendor="MyVID", idProduct="MyPID", backend=backend)
This produced a similar error, but with a different traceback, when I placed the code in rflib's __init__ and in the rfcat script:
Traceback (most recent call last):
File "/opt/anaconda3/envs/rftools/bin/rfcat", line 4, in <module>
__import__('pkg_resources').run_script('rfcat==1.9.5', 'rfcat')
File "/opt/anaconda3/envs/rftools/lib/python2.7/site-packages/pkg_resources/__init__.py", line 666, in run_script
self.require(requires)[0].run_script(script_name, ns)
File "/opt/anaconda3/envs/rftools/lib/python2.7/site-packages/pkg_resources/__init__.py", line 1469, in run_script
exec(script_code, namespace, namespace)
File "/opt/anaconda3/envs/rftools/lib/python2.7/site-packages/rfcat-1.9.5-py2.7.egg/EGG-INFO/scripts/rfcat", line 12, in <module>
File "build/bdist.macosx-10.7-x86_64/egg/rflib/__init__.py", line 15, in <module>
File "/opt/anaconda3/envs/rftools/lib/python2.7/site-packages/usb/core.py", line 1304, in find
raise NoBackendError('No backend available')
usb.core.NoBackendError: No backend available
I have since reset things back to how I started and am still getting the original error listed above.
I think this largely has to do with the Anaconda environment, which I can of course remove. I want to try to find a way to make this work, though. Is there a better method to help rfcat find libusb as required? Another possible solution is resolving the actual executable_path. Does anyone know how to find the executable_path?
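As a minimal diagnostic sketch (the explicit dylib path below is an assumption; point it at wherever libusb-1.0 actually lives, e.g. a Homebrew install), something like this shows what ctypes and pyusb can resolve inside the environment before rfcat ever runs:
# Diagnostic sketch: check libusb resolution inside the conda environment.
from ctypes.util import find_library
import usb.backend.libusb1 as libusb1

# What ctypes resolves on its own (None means it found nothing).
print("find_library('usb-1.0') -> %r" % find_library("usb-1.0"))

# What pyusb gets when pointed at an explicit dylib path (assumed location).
backend = libusb1.get_backend(
    find_library=lambda name: "/usr/local/lib/libusb-1.0.0.dylib")
print("explicit backend -> %r" % backend)  # None means the dylib failed to load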
In case it helps, I will list the default locations for commands and files.
rfcat: /opt/anaconda3/envs/rftools/lib/python2.7/site-packages/rfcat
pyusb: /opt/anaconda3/envs/rftools/lib/python2.7/site-packages/usb
ctypes: /opt/anaconda3/envs/rftools/lib/python2.7/ctypes
libusb: /opt/anaconda3/envs/rftools/lib/python2.7/site-packages/usb/lib
Pretty new to both scapy and Python, so apologies for what may be a thickheaded question.
I know that it is new and may have issues on Windows, but I have successfully installed scapy3 on Windows 2012 R2 and Ubuntu Linux. Unfortunately, I actually hope to use it on Windows 7, and I am getting the following error message:
Traceback (most recent call last):
File "C:\Python35\Scripts\\scapy", line 25, in <module>
interact()
File "C:\Python35\lib\site-packages\scapy\main.py", line 293, in interact
scapy_builtins = __import__("scapy.all",globals(),locals(),".").__dict__
File "C:\Python35\lib\site-packages\scapy\all.py", line 16, in <module>
from .arch import *
File "C:\Python35\lib\site-packages\scapy\arch\__init__.py", line 95, in <module>
from .windows import *
File "C:\Python35\lib\site-packages\scapy\arch\windows\__init__.py", line 200, in <module>
ifaces.load_from_powershell()
File "C:\Python35\lib\site-packages\scapy\arch\windows\__init__.py", line 151, in load_from_powers
hell
for i in get_windows_if_list():
File "C:\Python35\lib\site-packages\scapy\arch\windows\__init__.py", line 86, in get_windows_if_list
name, value = [ j.strip() for j in i.split(':') ]
ValueError: too many values to unpack (expected 2)
I have searched via Google and on Stack Overflow but have not found a solution.
Any guidance is appreciated.
The platform is Windows 7 with Python 3.5.
Late answer: you are using a fork of scapy that does not officially support Windows 7.
Since very recently, the original secdev/scapy supports Python 3, so there is no need to keep using the fork that does not support Windows 7 :-)
Feel free to have a look at
https://github.com/secdev/scapy
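If it is unclear which scapy package ends up being imported after switching, a quick check along these lines might help (attribute access is hedged with getattr, since the fork may lay things out differently):
# Check which scapy installation the interpreter actually picks up.
import scapy

print(scapy.__file__)                        # where it is installed
print(getattr(scapy, "VERSION", "unknown"))  # version string, if exposed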
Traceback (most recent call last):
File "tornado_runner.py", line 18, in <module>
main()
File "tornado_runner.py", line 15, in main
IOLoop.instance().start()
File "C:\Python27\lib\site-packages\tornado\ioloop.py", line 858, in start
event_pairs = self._impl.poll(poll_timeout)
File "C:\Python27\lib\site-packages\tornado\platform\select.py", line 63, in poll
self.read_fds, self.write_fds, self.error_fds, timeout)
select.error: (10038, 'An operation was attempted on something that is not a socket')
It looks like this issue was solved a while ago: https://github.com/tornadoweb/tornado/issues/1360
But over the last few days I have started to see a lot of these errors in a production Windows environment. Does anyone have a clue?
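For context, a minimal sketch of what produces this class of error on Windows (the temporary file name is arbitrary): the select-based IOLoop hands every registered descriptor to select(), and on Windows select() only accepts sockets, so any other descriptor raises error 10038.
# Windows-only sketch: select() rejects non-socket descriptors with error
# 10038 ("An operation was attempted on something that is not a socket"),
# which is what Tornado's SelectIOLoop surfaces in the traceback above.
import os
import select

f = open("selecttest.tmp", "w")
try:
    select.select([f], [], [], 0)
except (OSError, select.error) as e:  # select.error on Python 2
    print(e)
finally:
    f.close()
    os.remove("selecttest.tmp")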
I ran into the same issue, and I searched everywhere but couldn't find any solution at the code level.
Finally, I tried resetting Winsock on my Windows machine by running "netsh winsock reset" on the command line, and it worked.
Hi text mining champions,
I'm using Anaconda with NLTK v3.2 on Windows 10 (the client's environment).
When I try to POS tag, I keep getting a urllib2 error:
URLError: <urlopen error unknown url type: c>
It seems urllib2 is unable to recognize Windows paths? How can I work around this?
The command is as simple as:
nltk.pos_tag(nltk.word_tokenize("Hello World"))
Edit:
There is a duplicate question; however, I think the answers obtained here by manan and alvas are a better fix.
EDITED
This issue has been resolved as of NLTK v3.2.1. Upgrading your NLTK version resolves it, e.g. pip install -U nltk.
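To confirm that the upgraded NLTK is the one your Anaconda environment actually imports, a quick check:
# Verify the interpreter sees the upgraded NLTK (3.2.1 or later).
import nltk

print(nltk.__version__)  # should be '3.2.1' or newer after pip install -U nltk
print(nltk.__file__)     # confirms which installation is being imported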
I faced the same issue, and the error I encountered was as follows:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python27\lib\site-packages\nltk-3.2-py2.7.egg\nltk\tag\__init__.py", line 110, in pos_tag
tagger = PerceptronTagger()
File "C:\Python27\lib\site-packages\nltk-3.2-py2.7.egg\nltk\tag\perceptron.py", line 141, in __init__
self.load(AP_MODEL_LOC)
File "C:\Python27\lib\site-packages\nltk-3.2-py2.7.egg\nltk\tag\perceptron.py", line 209, in load
self.model.weights, self.tagdict, self.classes = load(loc)
File "C:\Python27\lib\site-packages\nltk-3.2-py2.7.egg\nltk\data.py", line 801, in load
opened_resource = _open(resource_url)
File "C:\Python27\lib\site-packages\nltk-3.2-py2.7.egg\nltk\data.py", line 924, in _open
return urlopen(resource_url)
File "C:\Python27\lib\urllib2.py", line 126, in urlopen
return _opener.open(url, data, timeout)
File "C:\Python27\lib\urllib2.py", line 391, in open
response = self._open(req, data)
File "C:\Python27\lib\urllib2.py", line 414, in _open
'unknown_open', req)
File "C:\Python27\lib\urllib2.py", line 369, in _call_chain
result = func(*args)
File "C:\Python27\lib\urllib2.py", line 1206, in unknown_open
raise URLError('unknown url type: %s' % type)
urllib2.URLError: <urlopen error unknown url type: c>
The URLError that you mentioned was due to a bug in the perceptron.py file within the NLTK library for Windows.
On my machine, the file is at this location:
C:\Python27\Lib\site-packages\nltk-3.2-py2.7.egg\nltk\tag\perceptron.py
(Basically, look at the equivalent location on your machine, wherever your Python27 folder is.)
The bug was in the code that finds the location of the averaged_perceptron_tagger on your machine. Have a look at lines 801 and 924 of data.py, mentioned in the traceback above, regarding this.
I think the NLTK developer community recently fixed this bug. Have a look at this commit made to their code a few days ago:
https://github.com/nltk/nltk/commit/d3de14e58215beebdccc7b76c044109f6197d1d9#diff-26b258372e0d13c2543de8dbb1841252
The snippet where the change was made is as follows:
self.tagdict = {}
self.classes = set()
if load:
    AP_MODEL_LOC = 'file:'+str(find('taggers/averaged_perceptron_tagger/'+PICKLE))
    self.load(AP_MODEL_LOC)
    # Initially it was: AP_MODEL_LOC = str(find('taggers/averaged_perceptron_tagger/'+PICKLE))

def tag(self, tokens):
Updating the file to the most recent commit worked for me, and I was able to use the nltk.pos_tag command. I believe this will resolve your problem as well (assuming you have everything else set up).
EDITED
This issue has been resolved as of NLTK v3.2.1. Please upgrade your NLTK!
First, read #MananVyas's answer for the why:
https://stackoverflow.com/a/35902494/610569
Here's the how: without downgrading to NLTK v3.1, staying on NLTK 3.2, you can use this "hack":
>>> from nltk.tag import PerceptronTagger
>>> from nltk.data import find
>>> PICKLE = "averaged_perceptron_tagger.pickle"
>>> AP_MODEL_LOC = 'file:'+str(find('taggers/averaged_perceptron_tagger/'+PICKLE))
>>> tagger = PerceptronTagger(load=False)
>>> tagger.load(AP_MODEL_LOC)
>>> pos_tag = tagger.tag
>>> pos_tag('The quick brown fox jumps over the lazy dog'.split())
[('The', 'DT'), ('quick', 'JJ'), ('brown', 'NN'), ('fox', 'NN'), ('jumps', 'VBZ'), ('over', 'IN'), ('the', 'DT'), ('lazy', 'JJ'), ('dog', 'NN')]
I faced the same issue a while back.
Solution:
nltk.download('averaged_perceptron_tagger')
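Putting it together, a minimal end-to-end sketch (the downloads only need to run once per machine):
# One-time downloads, then tag as in the original question.
import nltk

nltk.download('averaged_perceptron_tagger')  # model used by nltk.pos_tag
nltk.download('punkt')                       # tokenizer model for word_tokenize

print(nltk.pos_tag(nltk.word_tokenize("Hello World")))
# e.g. [('Hello', 'NNP'), ('World', 'NNP')]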
I'm trying to set up the Chromium code following the documentation on Mac OS X 10.9.2.
I could successfully fetch the code with the command:
fetch --nohooks chromium --nosvn=True
but when I try to sync the projects with the gclient sync command, it breaks in the middle of the process, throwing the following OSError:
________ running '/usr/bin/python src/build/download_nacl_toolchains.py --no-arm-trusted --keep' in '/Volumes/NJHD/google'
Updating /Volumes/NJHD/google/src/native_client/toolchain/.tars/toolchain_mac_x86.tar.bz2
from https://storage.googleapis.com/nativeclient-archive2/x86_toolchain/r12790/toolchain_mac_x86.tar.bz2.
.....................................................................................
|------------------------------------------------|
..................................................Traceback (most recent call last):
File "src/build/download_nacl_toolchains.py", line 63, in <module>
sys.exit(Main(sys.argv[1:]))
File "src/build/download_nacl_toolchains.py", line 58, in Main
download_toolchains.main(args)
File "/Volumes/NJHD/google/src/native_client/build/download_toolchains.py", line 414, in main
keep=options.keep, verbose=options.verbose):
File "/Volumes/NJHD/google/src/native_client/build/download_toolchains.py", line 263, in SyncFlavor
tar.Extract()
File "/Volumes/NJHD/google/src/native_client/build/cygtar.py", line 313, in Extract
self.tar.extract(m)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/tarfile.py", line 2084, in extract
self._extract_member(tarinfo, os.path.join(path, tarinfo.name))
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/tarfile.py", line 2168, in _extract_member
self.makelink(tarinfo, targetpath)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/tarfile.py", line 2252, in makelink
os.link(tarinfo._link_target, targetpath)
OSError: [Errno 45] Operation not supported
Error: Command /usr/bin/python src/build/download_nacl_toolchains.py --no-arm-trusted --keep returned non-zero exit status 1 in /Volumes/NJHD/google
Hook '/usr/bin/python src/build/download_nacl_toolchains.py --no-arm-trusted --keep' took 89.91 secs
It seems to me that it is complaining about os.link(tarinfo._link_target, targetpath), so I tried creating a link with that function myself, and that works fine.
Is there any other configuration that I need to take care of?
Thanks in advance!
I placed the Chromium project on an external hard disk as you did, and I got the same error.
Perhaps you should try syncing on your internal drive instead.
I haven't tried it myself. Hope that helps.
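To confirm whether the external volume is the culprit, a minimal check along these lines might help (the /Volumes/NJHD/google path is taken from the log above; adjust as needed):
# Try to create a hard link on the volume in question. OSError with
# errno 45 ("Operation not supported") means the filesystem on that
# drive does not support hard links, which is exactly what tarfile's
# makelink()/os.link() hits during the NaCl toolchain extraction.
import os

target_dir = "/Volumes/NJHD/google"  # checkout location from the log
src = os.path.join(target_dir, "linktest_src")
dst = os.path.join(target_dir, "linktest_dst")

with open(src, "w") as f:
    f.write("test")
try:
    os.link(src, dst)
    print("hard links supported here")
    os.remove(dst)
except OSError as e:
    print("hard links not supported: %s" % e)
finally:
    os.remove(src)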