How can I get the status of non-blocking jobs of gearman submit_multiple_jobs?

How can I tell whether a background job or a non-blocking request made by a gearman client succeeded or not?
while True:
    jobs = getJobs()
    submitted_requests = gm_client.submit_multiple_jobs(jobs, background=False, wait_until_complete=False)
    # check status in a non-blocking mode

You can refer to this link; here is the snippet:
completed_requests = gm_client.wait_until_jobs_completed(submitted_requests, poll_timeout=5.0)
for completed_job_request in completed_requests:
    check_request_status(completed_job_request)
check_request_status is defined in the link.
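Since the target of that link isn't included here, the following is a minimal sketch of check_request_status in the style of the python-gearman documentation, assuming python-gearman 2.x (GearmanJobRequest exposes complete, timed_out and state, and JOB_UNKNOWN lives in gearman.constants):
from gearman.constants import JOB_UNKNOWN

def check_request_status(job_request):
    # complete, timed_out and state are attributes of GearmanJobRequest
    if job_request.complete:
        print("Job %s finished! Result: %s - %s"
              % (job_request.job.unique, job_request.state, job_request.result))
    elif job_request.timed_out:
        print("Job %s timed out!" % job_request.job.unique)
    elif job_request.state == JOB_UNKNOWN:
        print("Job %s connection failed!" % job_request.job.unique)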


Dask: what is the asyncio equivalent of as_completed?

I have working Dask client code like this:
client = Client(address=self.cluster)
futures = []
for job in jobs:
    future = client.submit(...)
    futures.append(future)
for future, result in as_completed(futures, with_results=True, raise_errors=True):
    key = future.key
    state = (State.FINISHED if result is True else State.FAILED)
    ...
The Dask as_completed function is relevant because it iterates over jobs in the order they finish.
The problem with that code is that it may block indefinitely on the as_completed call, for instance when no workers are available.
Is there a way to rewrite it with asyncio? With asyncio I could use the wait function with a timeout, in order to unblock the call when something goes wrong.
Thank you
You can use asyncio.as_completed: https://docs.python.org/3/library/asyncio-task.html
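As a minimal sketch of how that could look, assuming a scheduler address and a work function of your own (both placeholder names here): with asynchronous=True the Dask client is usable from coroutines and its futures are awaitable, so asyncio.as_completed can drive them, and its timeout parameter bounds the total wait instead of blocking indefinitely:
import asyncio
from dask.distributed import Client

async def run_jobs(jobs):
    # placeholder scheduler address; asynchronous=True makes the client awaitable
    client = await Client("tcp://127.0.0.1:8786", asynchronous=True)
    try:
        futures = [client.submit(work, job) for job in jobs]  # work() is hypothetical
        # results arrive in completion order; awaiting past the 30 s deadline
        # raises asyncio.TimeoutError, so the loop cannot hang forever
        for coro in asyncio.as_completed(futures, timeout=30):
            try:
                result = await coro
            except asyncio.TimeoutError:
                break  # e.g. no workers available
    finally:
        await client.close()
asyncio.run(run_jobs(jobs)) then replaces the synchronous as_completed loop.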

bot.get_all_channels() gets ignored (discord.py)

When obtaining all channels to send a message to all of them, the bot ignores the command. Here's my code:
async def lockdown(ctx):
    allchannels = bot.get_all_channels()
    overwrite = channel.overwrites_for(ctx.guild.default_role)
    locked = overwrite.send_messages = False
    await locked.send(allchannels, 'This server has been locked down.')
Try printing allchannels; you'll see where the error is. bot.get_all_channels() returns a generator over all channels, not something you can send to, so you can't use it this way, that's all.
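For what the command probably intends, here is a minimal sketch, assuming discord.py 1.x and that bot is a commands.Bot: iterate over the guild's text channels, deny send_messages for the default role on each, and post the announcement per channel.
@bot.command()
async def lockdown(ctx):
    for channel in ctx.guild.text_channels:
        # deny @everyone permission to send messages in this channel
        await channel.set_permissions(ctx.guild.default_role, send_messages=False)
        await channel.send('This server has been locked down.')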

How do I avoid the loop argument

The following code is part of some automated tests that I have written in python 3.6:
import asyncio
import logging
import threading
import time

import sockjs
from aiohttp import web

connected = False

def aiohttp_server(loop):
    async def handler(msg, session):
        global connected
        if msg.type == sockjs.MSG_OPEN:
            connected = True
        if msg.type == sockjs.MSG_CLOSE:
            connected = False
    app = web.Application(loop=loop)
    sockjs.add_endpoint(app, handler)
    runner = web.AppRunner(app)
    return runner

def run_server(runner, loop):
    logging.basicConfig(level=logging.DEBUG,
                        format='%(asctime)s %(levelname)s %(message)s')
    asyncio.set_event_loop(loop)
    loop.run_until_complete(runner.setup())
    site = web.TCPSite(runner, 'localhost', 8080)
    loop.run_until_complete(site.start())
    loop.run_forever()

def start_server():
    loop = asyncio.new_event_loop()
    t = threading.Thread(target=run_server, args=(aiohttp_server(loop), loop), daemon=True)
    t.start()
    time.sleep(0.01)
Basically, calling start_server should initiate a simple web server with a sockjs endpoint named /sockjs
I am not yet a master of python's async keyword. There are two issues that I suspect are related:
Firstly, I am getting a deprecation warning on the app = web.Application(loop=loop) statement:
/home/peter/incubator/sockjs_client/tests/test_sockjs_client.py:25: DeprecationWarning: loop argument is deprecated
app = web.Application(loop=loop)
/home/peter/.local/lib/python3.6/site-packages/sockjs/route.py:54: DeprecationWarning: loop property is deprecated
manager = SessionManager(name, app, handler, app.loop)
And secondly, the tests fail occasionally. I believe that, depending on machine load, sometimes the server hasn't had enough time to start before the test code actually starts executing.
Basically, what I need is for the start_server function to initialise a web application with a websocket endpoint, and not return until the application is prepared to accept websocket connections.
Firstly, I am getting a deprecation warning on the app = web.Application(loop=loop) statement:
The recommended way to avoid passing around the loop everywhere is to switch to asyncio.run. Instead of managing the loop manually, let asyncio.run create (and close) the loop for you. If all your work is done in coroutines, you can access the loop with get_event_loop() or get_running_loop().
Basically, what I need is for the start_server function to initialise a web application with a websocket endpoint, and not return until the application is prepared to accept websocket connections.
You can pass a threading.Event to the thread that gets set when the site is set up, and wait for it in the main thread.
Here is an (untested) example that implements both suggestions:
connected = False

def aiohttp_server():
    async def handler(msg, session):
        global connected
        if msg.type == sockjs.MSG_OPEN:
            connected = True
        if msg.type == sockjs.MSG_CLOSE:
            connected = False
    app = web.Application()
    sockjs.add_endpoint(app, handler)
    return web.AppRunner(app)

async def run_server(ready):
    logging.basicConfig(level=logging.DEBUG,
                        format='%(asctime)s %(levelname)s %(message)s')
    runner = aiohttp_server()
    await runner.setup()
    site = web.TCPSite(runner, 'localhost', 8080)
    await site.start()
    ready.set()
    # emulates loop.run_forever()
    await asyncio.get_running_loop().create_future()

def start_server():
    ready = threading.Event()
    threading.Thread(target=asyncio.run, args=(run_server(ready),),
                     daemon=True).start()
    ready.wait()
Please upgrade sockjs to the newest version.
It doesn't require passing the loop anymore.

Daemon-kit process one amqp job at a time

We've used daemon-kit to create an AMQP worker which should receive a job and then ask for a new one, but not before the first job is finished. The problem is that daemon-kit forks the job and immediately starts a new one if there is another in the RabbitMQ queue.
Is there a formal way to force one-job-at-a-time behaviour in daemon-kit? Or how can we achieve this?
This is a short version of how we start the AMQP worker and process jobs. When a job finishes with a result, it publishes this back to the RabbitMQ server.
# Run an event-loop for processing
DaemonKit::AMQP.run do |connection|
  connection.on_tcp_connection_loss do |client, settings|
    DaemonKit.logger.debug("AMQP connection status changed: #{client.status}")
    client.reconnect(false, 1)
  end

  amq = AMQP::Channel.new
  amq.queue(engine_key).subscribe do |metadata, msg|
    msg_decode = JSON.parse(msg)
    job = REFxEngineRunnerAPI10.new msg_decode
    result = job.run(metadata.correlation_id)

    amq.queue(metadata.reply_to, :auto_delete => false)
    xc = amq.default_exchange
    xc.publish JSON.dump(result), :routing_key => metadata.reply_to, :correlation_id => metadata.correlation_id
  end
end
UPDATE
I found this to work for us:
DaemonKit::AMQP.run do |connection|
  amq = AMQP::Channel.new(connection, prefetch: 1)

  # I need this extra line because I use a RabbitMQ version newer than 2.3.6
  amq.qos(0, 1)

  # be sure to subscribe with (:ack => true)
  amq.queue(engine_key).subscribe(:ack => true) do |metadata, msg|
    #### run long job, one at a time

    # tell RabbitMQ the job is finished so a new one can be received
    metadata.ack
  end
end
I'm taking a stab in the dark here, since this sounds to me exactly how the protocol should behave. You can however use QoS or prefetching to limit the number of messages sent down to a subscriber from the broker, using something like this:
amq = AMQP::Channel.new(connection, prefetch: 1)
According to the example this should give you the behaviour you desire.

session error when using multiple uwsgi worker and beaker session.typ is memory

I'm running a Pyramid webapp, using velruse to do OAuth. Running the app alone, it succeeds,
but when running under uWSGI with multiple workers and session.type = memory set,
request.session does not contain the necessary token info on the callback from OAuth.
production.ini:
session.type = memory
session.data_dir = %(here)s/data/sessions/data
session.lock_dir = %(here)s/data/sessions/lock
session.key = mykey
session.secret = mysecret
[uwsgi]
socket = 127.0.0.1:6543
master = true
workers = 8
max-requests = 65536
debug = false
autoload = true
virtualenv = /home/myname/my_env
pidfile = ./uwsgi.pid
daemonize = ./mypyramid-uwsgi.log
If you use memory as session store, only the worker in which the session data has been written will be able to use that info. You should use another session store, one that can be shared by all of the workers/processes.
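For example, a sketch of a shared store reusing the directories already present in the question's production.ini (assuming beaker, whose built-in file type stores sessions on disk so all workers can read them; ext:memcached is another option, and the address below is a placeholder):
session.type = file
session.data_dir = %(here)s/data/sessions/data
session.lock_dir = %(here)s/data/sessions/lock

# or, if a memcached server is available:
# session.type = ext:memcached
# session.url = 127.0.0.1:11211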
Your uWSGI config is not clear (it looks like it only contains the socket option). Can you re-paste it?
