I am new to Celery and I am following the tutorial given on their site. I got this error.
from celery import Celery

app = Celery('tasks', broker='pyamqp://guest@localhost//')

@app.task
def add(x, y):
    return x + y
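(For context, once a worker is running, the tutorial's next step is to call the task from a separate Python shell, roughly like this:)
from tasks import add
add.delay(4, 4)  # sends the task to the broker for a worker to execute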
and cmd shows an error like this:
 -------------- celery@DESKTOP-O90R45G v4.0.2 (latentcall)
---- **** -----
--- * ***  * -- Windows-10-10.0.14393 2016-12-16 20:05:48
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app:         tasks:0x4591950
- ** ---------- .> transport:   amqp://guest:**@localhost:5672//
- ** ---------- .> results:     disabled://
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
 -------------- [queues]
                .> celery           exchange=celery(direct) key=celery

[tasks]
  . tasks.add
[2016-12-16 20:05:49,029: CRITICAL/MainProcess] Unrecoverable error: TypeError('argument 1 must be an integer, not _subprocess_handle',)
Traceback (most recent call last):
  File "c:\python27\lib\site-packages\celery\worker\worker.py", line 203, in start
    self.blueprint.start(self)
  File "c:\python27\lib\site-packages\celery\bootsteps.py", line 119, in start
    step.start(parent)
  File "c:\python27\lib\site-packages\celery\bootsteps.py", line 370, in start
    return self.obj.start()
  File "c:\python27\lib\site-packages\celery\concurrency\base.py", line 131, in start
    self.on_start()
  File "c:\python27\lib\site-packages\celery\concurrency\prefork.py", line 112, in on_start
    **self.options)
  File "c:\python27\lib\site-packages\billiard\pool.py", line 1008, in __init__
    self._create_worker_process(i)
  File "c:\python27\lib\site-packages\billiard\pool.py", line 1117, in _create_worker_process
    w.start()
  File "c:\python27\lib\site-packages\billiard\process.py", line 122, in start
    self._popen = self._Popen(self)
  File "c:\python27\lib\site-packages\billiard\context.py", line 383, in _Popen
    return Popen(process_obj)
  File "c:\python27\lib\site-packages\billiard\popen_spawn_win32.py", line 64, in __init__
    _winapi.CloseHandle(ht)
TypeError: argument 1 must be an integer, not _subprocess_handle

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "c:\python27\lib\site-packages\billiard\spawn.py", line 159, in spawn_main
    new_handle = steal_handle(parent_pid, pipe_handle)
  File "c:\python27\lib\site-packages\billiard\reduction.py", line 126, in steal_handle
    _winapi.DUPLICATE_SAME_ACCESS | _winapi.DUPLICATE_CLOSE_SOURCE)
WindowsError: [Error 6] The handle is invalid
Celery is not supported on Windows anymore since version 4.0, as stated in their README:
Celery is a project with minimal funding, so we don't support Microsoft Windows. Please don't open any issues related to that platform.
Unfortunately, this bug seems to be one of the side effects (support for process handles was removed).
Your best bet is to downgrade Celery. Remove the current version first, then install an older release:
pip uninstall celery
pip install celery==3.1.18
It looks like your RabbitMQ server is not running. Did you install and start the RabbitMQ server?
You can check out the RabbitMQ docs and install it. Start the RabbitMQ server and then start your Celery worker with your app:
celery worker -l info -A tasks
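For example, on a Debian/Ubuntu host you can check and start the broker like this before launching the worker (the exact commands depend on your platform; see the RabbitMQ docs for the Windows service equivalents):
# check whether the broker is running
sudo rabbitmqctl status
# start it if it is not
sudo service rabbitmq-server start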
Related
I am trying to read the core dump info that is saved in flash.
I am first reading the coredump partition:
esptool.py --port /dev/ttyUSB0 read_flash 0x3ec000 0xE000 ./core.bin
Then I am trying to extract the info using:
espcoredump.py info_corefile --core ./core.bin --core-format raw build/httpsOTA_S_FE.elf
But the script is giving me the following error:
espcoredump.py v0.4-dev
===============================================================
==================== ESP32 CORE DUMP START ====================
Traceback (most recent call last):
File "/home/praveen/opt/esp/idf-4.4.2/esp-idf/components/espcoredump/espcoredump.py", line 350, in <module>
temp_core_files = info_corefile()
File "/home/praveen/opt/esp/idf-4.4.2/esp-idf/components/espcoredump/espcoredump.py", line 170, in info_corefile
gdb = EspGDB(gdb_tool, [rom_sym_cmd], core_elf_path, args.prog, timeout_sec=args.gdb_timeout_sec)
File "/home/praveen/opt/esp/idf-4.4.2/esp-idf/components/espcoredump/corefile/gdb.py", line 45, in __init__
self._gdbmi_run_cmd_get_responses(cmd='-data-list-register-values x pc',
File "/home/praveen/opt/esp/idf-4.4.2/esp-idf/components/espcoredump/corefile/gdb.py", line 63, in _gdbmi_run_cmd_get_responses
more_responses = self.p.get_gdb_response(timeout_sec=0, raise_error_on_timeout=False)
File "/home/praveen/.espressif/python_env/idf4.4_py3.10_env/lib/python3.10/site-packages/pygdbmi/gdbcontroller.py", line 269, in get_gdb_response
self.verify_valid_gdb_subprocess()
File "/home/praveen/.espressif/python_env/idf4.4_py3.10_env/lib/python3.10/site-packages/pygdbmi/gdbcontroller.py", line 175, in verify_valid_gdb_subprocess
raise NoGdbProcessError(
pygdbmi.gdbcontroller.NoGdbProcessError: gdb process has already finished with return code: 127
My end goal is to send this partition data to my server and debug the info on the server. I first wanted to see the process on the host machine and then implement the same on my server.
espcoredump.py info_corefile --core ./core.bin --core-format raw build/httpsOTA_S_FE.elf
is supposed to print the core dump info.
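As a side note on that end goal, the upload step itself can be sketched roughly as follows, assuming a hypothetical HTTP endpoint on the server that accepts the raw dump (the URL and form field below are placeholders, not part of the ESP-IDF tooling):

import requests

# Hypothetical endpoint; replace with your server's actual upload URL.
UPLOAD_URL = "https://example.com/api/coredumps"

# core.bin is the raw coredump partition read out with esptool.py above.
with open("core.bin", "rb") as f:
    resp = requests.post(
        UPLOAD_URL,
        files={"core": ("core.bin", f, "application/octet-stream")},
    )
resp.raise_for_status()
print("uploaded, server responded with", resp.status_code)

The decoding with espcoredump.py (which needs the matching ELF and toolchain) would then run on the server side.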
I tried the same with a slightly older version of IDF (IDF-4.2.4) and I get the following error:
espcoredump.py v0.4-dev
===============================================================
==================== ESP32 CORE DUMP START ====================
Traceback (most recent call last):
File "/home/praveen/opt/esp/idf-4.2.4/esp-idf/components/espcoredump/espcoredump.py", line 1818, in <module>
main()
File "/home/praveen/opt/esp/idf-4.2.4/esp-idf/components/espcoredump/espcoredump.py", line 1813, in main
operation_func(args)
File "/home/praveen/opt/esp/idf-4.2.4/esp-idf/components/espcoredump/espcoredump.py", line 1644, in info_corefile
p,task_name = gdbmi_freertos_get_task_name(p, extra_info[0])
File "/home/praveen/opt/esp/idf-4.2.4/esp-idf/components/espcoredump/espcoredump.py", line 1543, in gdbmi_freertos_get_task_name
p,res = gdbmi_data_evaluate_expression(p, "(char*)((TCB_t *)0x%x)->pcTaskName" % tcb_addr)
File "/home/praveen/opt/esp/idf-4.2.4/esp-idf/components/espcoredump/espcoredump.py", line 1539, in gdbmi_data_evaluate_expression
p = gdbmi_cmd_exec(p, handlers, "-data-evaluate-expression \"%s\"" % expr)
File "/home/praveen/opt/esp/idf-4.2.4/esp-idf/components/espcoredump/espcoredump.py", line 1493, in gdbmi_cmd_exec
p.stdin.write(bytearray("%s\n" % gdbmi_cmd, encoding='utf-8'))
BrokenPipeError: [Errno 32] Broken pipe
I seem to have some problems letting Python read key events. I wrote this piece of code to record while I have space held down and stop when I've released it.
import pyaudio
import wave
import keyboard

CHUNK = 1024
FORMAT = pyaudio.paInt16
CHANNELS = 2
RATE = 44100
RECORD_SECONDS = 5
WAVE_OUTPUT_FILENAME = "output.wav"

p = pyaudio.PyAudio()

stream = p.open(format=FORMAT,
                channels=CHANNELS,
                rate=RATE,
                input=True,
                frames_per_buffer=CHUNK)

print("* recording")

frames = []

while keyboard.is_pressed('space'):
    data = stream.read(CHUNK)
    frames.append(data)

print("* done recording")

stream.stop_stream()
stream.close()
p.terminate()

wf = wave.open(WAVE_OUTPUT_FILENAME, 'wb')
wf.setnchannels(CHANNELS)
wf.setsampwidth(p.get_sample_size(FORMAT))
wf.setframerate(RATE)
wf.writeframes(b''.join(frames))
wf.close()
But when running this code, I get this error message.
python sound_record.py
* recording
Traceback (most recent call last):
File "sound_record.py", line 24, in <module>
while keyboard.is_pressed('space'):
File "/usr/local/lib/python2.7/site-packages/keyboard/__init__.py", line 162, in is_pressed
_listener.start_if_necessary()
File "/usr/local/lib/python2.7/site-packages/keyboard/_generic.py", line 36, in start_if_necessary
self.init()
File "/usr/local/lib/python2.7/site-packages/keyboard/__init__.py", line 112, in init
_os_keyboard.init()
File "/usr/local/lib/python2.7/site-packages/keyboard/_nixkeyboard.py", line 110, in init
build_device()
File "/usr/local/lib/python2.7/site-packages/keyboard/_nixkeyboard.py", line 106, in build_device
ensure_root()
File "/usr/local/lib/python2.7/site-packages/keyboard/_nixcommon.py", line 163, in ensure_root
raise ImportError('You must be root to use this library on linux.')
ImportError: You must be root to use this library on linux.
And when I do it using sudo:
sudo !!
sudo python sound_record.py
Password:
* recording
Traceback (most recent call last):
File "sound_record.py", line 24, in <module>
while keyboard.is_pressed('space'):
File "/usr/local/lib/python2.7/site-packages/keyboard/__init__.py", line 162, in is_pressed
_listener.start_if_necessary()
File "/usr/local/lib/python2.7/site-packages/keyboard/_generic.py", line 36, in start_if_necessary
self.init()
File "/usr/local/lib/python2.7/site-packages/keyboard/__init__.py", line 112, in init
_os_keyboard.init()
File "/usr/local/lib/python2.7/site-packages/keyboard/_nixkeyboard.py", line 110, in init
build_device()
File "/usr/local/lib/python2.7/site-packages/keyboard/_nixkeyboard.py", line 107, in build_device
device = aggregate_devices('kbd')
File "/usr/local/lib/python2.7/site-packages/keyboard/_nixcommon.py", line 141, in aggregate_devices
uinput = make_uinput()
File "/usr/local/lib/python2.7/site-packages/keyboard/_nixcommon.py", line 27, in make_uinput
uinput = open("/dev/uinput", 'wb')
IOError: [Errno 1] Operation not permitted: '/dev/uinput'
So why am I getting this error message?
You appear to be using the Python package keyboard, whose description is:
Hook and simulate keyboard events on Windows and Linux
If you want to work with keyboard events on macOS, you'll need to find a package that does that.
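As an illustration only, pynput is one package that supports macOS (as well as Linux and Windows). A rough sketch of the same push-to-talk idea with it might look like this; it is not a drop-in fix for the code above, and on macOS the process typically also needs Accessibility / Input Monitoring permission:

import pyaudio
import wave
from pynput import keyboard  # cross-platform keyboard listener

CHUNK = 1024
FORMAT = pyaudio.paInt16
CHANNELS = 2
RATE = 44100
WAVE_OUTPUT_FILENAME = "output.wav"

space_held = False

def on_press(key):
    global space_held
    if key == keyboard.Key.space:
        space_held = True

def on_release(key):
    global space_held
    if key == keyboard.Key.space:
        space_held = False
        return False  # returning False stops the listener

# The listener runs in its own thread and exits once space is released.
listener = keyboard.Listener(on_press=on_press, on_release=on_release)
listener.start()

p = pyaudio.PyAudio()
stream = p.open(format=FORMAT, channels=CHANNELS, rate=RATE,
                input=True, frames_per_buffer=CHUNK)

print("* hold space to record")
frames = []
while listener.is_alive():
    data = stream.read(CHUNK)
    if space_held:
        frames.append(data)

stream.stop_stream()
stream.close()
p.terminate()

wf = wave.open(WAVE_OUTPUT_FILENAME, 'wb')
wf.setnchannels(CHANNELS)
wf.setsampwidth(p.get_sample_size(FORMAT))
wf.setframerate(RATE)
wf.writeframes(b''.join(frames))
wf.close()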
I have Win7 32-bit, PyCharm 2016.1.
Python 2.7.10, pytest-3.0.3, py-1.4.31, pluggy-0.4.0
I start pytest in PyCharm:
tests.py .Exception in thread MyName:
Traceback (most recent call last):
File "C:\Python27\lib\threading.py", line 810, in __bootstrap_inner
self.run()
File "C:\Python27\lib\threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
File "C:\Users\user\PycharmProjects\test-py.test-on-linux-and-windows\background_worker.py", line 23, in task_manager
func(*args, **kwargs)
TypeError: t1() takes exactly 1 argument (0 given)
When I swap the order of test__is_returned_a_thread and test__thread_works_in_background, there are no errors.
Source code: https://github.com/patsevanton/test-py.test-on-linux-and-windows
I can't deploy the HDP stack (HDFS, YARN, Spark, etc.) via Ambari 2.2.1.1. I got this error:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py", line 35, in
BeforeAnyHook().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
method(env)
File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py", line 29, in hook
setup_users()
File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py", line 43, in setup_users
fetch_nonlocal_groups = params.fetch_nonlocal_groups
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 149, in init
self.arguments[key] = arg.validate(value)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 56, in validate
if not value in (True, False):
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/config_dictionary.py", line 81, in getattr
raise Fail("Configuration parameter '" + self.name + "' was not found in configurations dictionary!")
resource_management.core.exceptions.Fail: Configuration parameter 'fetch_nonlocal_groups' was not found in configurations dictionary!
Error: Error: Unable to run the custom hook script ['/usr/bin/python', '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py', 'ANY', '/var/lib/ambari-agent/data/command-387.json', '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY', '/var/lib/ambari-agent/data/structured-out-387.json', 'INFO', '/var/lib/ambari-agent/tmp']
Has anyone seen this error before? Thanks in advance!
I'm trying to learn Scrapy, and I'm running it on Mac OS X 10.11.2.
I'm following the tutorial; I've downloaded the tutorial files and created a new Spider file as described here: http://doc.scrapy.org/en/1.0/intro/tutorial.html
When I try to run the spider, I get the following exception:
2015-12-11 19:04:05 [scrapy] INFO: Scrapy 1.0.3 started (bot: tutorial)
2015-12-11 19:04:05 [scrapy] INFO: Optional features available: ssl, http11
2015-12-11 19:04:05 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'SPIDER_MODULES': ['tutorial.spiders'], 'BOT_NAME': 'tutorial'}
2015-12-11 19:04:06 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
Unhandled error in Deferred:
2015-12-11 19:04:06 [twisted] CRITICAL: Unhandled error in Deferred:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/scrapy/cmdline.py", line 150, in _run_command
cmd.run(args, opts)
File "/usr/local/lib/python2.7/site-packages/scrapy/commands/crawl.py", line 57, in run
self.crawler_process.crawl(spname, **opts.spargs)
File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 153, in crawl
d = crawler.crawl(*args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 1274, in unwindGenerator
return _inlineCallbacks(None, gen, Deferred())
--- <exception caught here> ---
File "/usr/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 1128, in _inlineCallbacks
result = g.send(result)
File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 71, in crawl
self.engine = self._create_engine()
File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 83, in _create_engine
return ExecutionEngine(self, lambda _: self.stop())
File "/usr/local/lib/python2.7/site-packages/scrapy/core/engine.py", line 64, in __init__
self.scheduler_cls = load_object(self.settings['SCHEDULER'])
File "/usr/local/lib/python2.7/site-packages/scrapy/utils/misc.py", line 44, in load_object
mod = import_module(module)
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/usr/local/lib/python2.7/site-packages/scrapy/core/scheduler.py", line 6, in <module>
from queuelib import PriorityQueue
File "/usr/local/lib/python2.7/site-packages/queuelib/__init__.py", line 1, in <module>
from queuelib.queue import FifoDiskQueue, LifoDiskQueue
File "/usr/local/lib/python2.7/site-packages/queuelib/queue.py", line 5, in <module>
import sqlite3
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/sqlite3/__init__.py", line 24, in <module>
from dbapi2 import *
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/sqlite3/dbapi2.py", line 28, in <module>
from _sqlite3 import *
exceptions.ImportError: No module named _sqlite3
2015-12-11 19:04:06 [twisted] CRITICAL:
Initially I thought that I had a problem with my sqlite installation, but it seems to be working if I run the command sqlite3 from the command line. Any ideas?
Apparently this is a known Homebrew problem, see here.
I think you should reinstall Homebrew's sqlite and python.
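Something along these lines, assuming both were installed through Homebrew:
brew reinstall sqlite
brew reinstall python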