'object has no attribute' when using the Tatsu API - discord.py

I am trying to create a discord.py bot that can check a user's server credits from the Tatsu bot.
I use the Tatsu API to get the user's credits, but I get the error 'object has no attribute 'credits''. The same error appears when I use avatar_url, avatar_hash, etc.
This is Tatsu library: https://github.com/PumPum7/Tatsu.py
@commands.command()
async def transfer(self, ctx, member: discord.Member):
    wrapper = ApiWrapper(key=os.environ['token'])
    user_profile = await wrapper.get_profile(member.id)
    await ctx.send(user_profile.credits)

I've taken a look at the source code of the library (it's a really bad one, to be honest). It seems that when an internal exception is thrown, instead of raising and propagating it, the author decided to return it (exact lines are here). I have no idea what the author wanted to achieve with that; nonetheless, you can use a simple if-statement to check whether the method returned an error:
@commands.command()
async def transfer(self, ctx, member: discord.Member):
    wrapper = ApiWrapper(key=os.environ['token'])
    user_profile = await wrapper.get_profile(member.id)
    if not isinstance(user_profile, Exception):
        await ctx.send(user_profile.credits)
    else:
        exc = user_profile
        print(f"An error happened:\n{exc.__class__.__name__}: {exc}")
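If you would rather have the error propagate like a normal exception instead of checking after every call, a small helper can re-raise whatever the library returned. This is a generic sketch (`unwrap` is my own name, not part of the Tatsu library), assuming the return-instead-of-raise behavior described above:

```python
import asyncio

async def unwrap(awaitable):
    # Re-raise any Exception instance the library *returned* instead of raising.
    result = await awaitable
    if isinstance(result, Exception):
        raise result
    return result

# Hypothetical usage inside the command:
# user_profile = await unwrap(wrapper.get_profile(member.id))
```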


How do I control an event loop?

I can't figure out how to handle an event loop so that I can run other code concurrently. I want it so that when the handler receives data, it prints the data without affecting anything else the program is doing.
I have tried wrapping trading_stream.run in an asyncio task, but this produces an error and isn't what I really want. It's as if once I run the stream, my program is stuck inside the update_handler function.
from alpaca.trading.stream import TradingStream

trading_stream = TradingStream('api-key', 'secret-key', paper=True)

async def update_handler(data):
    # trade updates will arrive in our async handler
    print(data)

# subscribe to trade updates and supply the handler as a parameter
trading_stream.subscribe_trade_updates(update_handler)
# start our websocket streaming
trading_stream.run()
Premise: it would probably be best to understand which event loop TradingStream is using and, if possible, schedule tasks on that loop once retrieved, e.g.
trading_stream = TradingStream('api-key', 'secret-key', paper=True)
evt_loop = trading_stream.some_evt_loop_getter()
evt_loop.create_task(my_concurrent_task())
if TradingStream is using asyncio.get_event_loop() under the hood, then the following is also possible.
import asyncio
trading_stream = TradingStream('api-key', 'secret-key', paper=True)
evt_loop = asyncio.get_event_loop()
evt_loop.create_task(my_concurrent_task())
Not being able to assess whether either of the above is the case, the following hack does solve your problem, but I would not resort to this unless the alternatives are not viable.
import asyncio

OTHER_LOGIC_FLAG = True

async def my_other_async_logic():
    ...  # Concurrent logic here

async def update_handler(data):
    global OTHER_LOGIC_FLAG
    if OTHER_LOGIC_FLAG:
        asyncio.create_task(my_other_async_logic())
        OTHER_LOGIC_FLAG = False
    # trade updates will arrive in our async handler
    print(data)
Again, do try to get a handle to the event loop first.
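To illustrate the "schedule on the running loop" idea without Alpaca, here is a minimal, self-contained sketch (`fake_stream` is a stand-in for the websocket, not part of any library) showing that a task created inside an async handler runs concurrently with it:

```python
import asyncio

events = []

async def background_work():
    # Runs concurrently with the handler that scheduled it.
    events.append("background started")

async def update_handler(data):
    # create_task schedules the coroutine on the currently running loop.
    asyncio.create_task(background_work())
    events.append(f"handled {data}")

async def fake_stream():
    # Stand-in for the websocket delivering one update.
    await update_handler("trade update")
    await asyncio.sleep(0)  # yield control so the background task can run

asyncio.run(fake_stream())
print(events)
```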

Trying to make a bot that sends a message for specific user ids

So I'm trying to make a bot that sends a message when somebody with a specific user id sends a message, but when I use it, it ends up spamming the message for every other user id, excluding the specific one. Here's my code.
@client.event
async def on_message(message):
    if message.author.id != 206883079837450241:
        await message.channel.send('example')
Switch the != to ==. != evaluates to True only when the message's author is not the given user, which is why the bot responds to everyone except them; == does the opposite.
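If you later want to target several user IDs, a set membership check keeps the intent readable. A minimal sketch (the helper name is my own, for illustration):

```python
TARGET_IDS = {206883079837450241}

def should_respond(author_id: int) -> bool:
    # Respond only when the author IS one of the target users.
    return author_id in TARGET_IDS
```

In on_message you would then write `if should_respond(message.author.id): ...`.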

Discord python bot creation: clear message command that shows how many messages were deleted

I need help making the bot respond with the exact number of messages that were cleared.
Example: when I type !clear 15 and it deletes 12 messages, I want it to respond with "12 messages cleared".
I tried something like this:
@commands.command()
async def clear(self, ctx, amount=10):
    await ctx.channel.purge(limit=amount)
    await ctx.send(f'{amount} messages cleared')
I know this shows how many messages it tried to clear rather than how many it actually cleared (of course, if it had worked, that is what would happen, but it didn't work as expected).
I searched a lot but could not find any similar posts, so I hope someone can help me.
TextChannel.purge returns a list of the messages that were deleted, so you can simply use len():
deleted_messages = await ctx.channel.purge(limit=amount)
await ctx.send(f"{len(deleted_messages) - 1} messages cleared")  # subtract 1 for the command invocation itself
Also, you should typehint the amount arg so it's automatically converted to an integer:
async def clear(self, ctx, amount: int = 10):

When and how to use asyncio queues?

I have multiple API routes which return data by querying the database individually.
Now I'm trying to build a dashboard which queries the above APIs. How should I put the API calls in a queue so that they are executed asynchronously?
I tried
await queue.put({'response_1': await api_1(**kwargs), 'response_2': await api_2(**kwargs)})
but it seems that the data is returned while the task is being put in the queue.
Now I'm using
await queue.put(('response_1', api_1(**args_dict)))
in the producer, and in the consumer I'm parsing the tuple and making the API calls, which I think I'm doing wrong.
Question1
Is there a better way to do it?
This is code I'm using to create tasks
producers = [create_task(producer(queue, **args_dict)) for row in stats]
consumers = [create_task(consumer(queue)) for row in stats]
await gather(*producers)
await queue.join()
for con in consumers:
    con.cancel()
Question2: Should I use create_task or ensure_future? Sorry if it's repetitive, but I can't understand the difference, and after searching online I became more confused.
I'm using FastAPI, databases(async) packages.
I'm using a tuple instead of a dictionary, like await queue.put(('response_1', api_1(**kwargs)))
./app/dashboard.py:90: RuntimeWarning: coroutine 'api_1' was never awaited
item: Tuple = await queue.get_nowait()
My code for consumer is
async def consumer(return_obj: dict, queue: Queue):
    item: Tuple = await queue.get_nowait()
    print(f'consumer took {item[0]} from queue')
    return_obj.update({f'{item[0]}': await item[1]})
    await queue.task_done()
If I don't use get_nowait, the consumer gets stuck because the queue may be empty, but if I use get_nowait, the above error is shown.
I haven't defined a max queue length.
-----------EDIT-----------
Producer
async def producer(queue: Queue, **kwargs):
    await queue.put(('response_1', api_1(**kwargs)))
You can drop the await from your first snippet and send the coroutine object through the queue. A coroutine object is the result of calling a coroutine function without awaiting it.
# producer:
await queue.put({'response_1': api_1(**kwargs),
                 'response_2': api_2(**kwargs)})
...
# consumer:
while True:
    dct = await queue.get()
    for name, api_coro in dct.items():
        result = await api_coro
        print('result of', name, ':', result)
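Putting the pattern together, here is a self-contained sketch of sending coroutine objects through a queue and awaiting them in the consumer (the api_1/api_2 stubs are placeholders for your real endpoint calls):

```python
import asyncio

async def api_1(**kwargs):
    await asyncio.sleep(0)  # stand-in for a real async API call
    return "data-1"

async def api_2(**kwargs):
    await asyncio.sleep(0)
    return "data-2"

async def producer(queue):
    # Put coroutine *objects*; they are awaited later by the consumer.
    await queue.put({"response_1": api_1(), "response_2": api_2()})

async def consumer(queue, results):
    while True:
        dct = await queue.get()
        for name, coro in dct.items():
            results[name] = await coro
        queue.task_done()

async def main():
    queue = asyncio.Queue()
    results = {}
    worker = asyncio.create_task(consumer(queue, results))
    await producer(queue)
    await queue.join()      # wait until the consumer calls task_done
    worker.cancel()
    return results

results_out = asyncio.run(main())
print(results_out)
```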
Should I use create_task or ensure_future?
If the argument is the result of invoking a coroutine function, you should use create_task (see this comment by Guido for explanation). As the name implies, it will return a Task instance that drives that coroutine. The task can also be awaited, but it continues to run in the background.
ensure_future is a much more specialized function that converts various kinds of awaitable objects to their corresponding futures. It is useful when implementing functions like asyncio.gather() which accept different kinds of awaitable objects for convenience and need to convert them into futures before working with them.
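A small sketch of the difference: both accept coroutines, but ensure_future also passes Future objects through unchanged, which create_task would reject:

```python
import asyncio

async def work():
    return 42

async def main():
    t1 = asyncio.create_task(work())    # coroutine -> Task, preferred since 3.7
    t2 = asyncio.ensure_future(work())  # also wraps a coroutine in a Task
    fut = asyncio.get_running_loop().create_future()
    fut.set_result("already done")
    # ensure_future returns a Future argument unchanged:
    assert asyncio.ensure_future(fut) is fut
    return await t1, await t2, await fut

results = asyncio.run(main())
print(results)
```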

asyncio.gather with selective return_exceptions

I want for asyncio.gather to immediately raise any exception except for some particular exception class, which should be instead returned in the results list. Right now, I just slightly modified the canonical implementation of asyncio.gather in CPython and use that, but I wonder if there is not a more canonical way to do it.
You can implement such semantics using the more powerful asyncio.wait primitive and its return_when=asyncio.FIRST_EXCEPTION option:
async def xgather(*coros, allowed_exc):
    results = {}
    pending = futures = list(map(asyncio.ensure_future, coros))
    while pending:
        done, pending = await asyncio.wait(
            pending, return_when=asyncio.FIRST_EXCEPTION)
        for fut in done:
            try:
                results[fut] = fut.result()
            except allowed_exc as e:
                results[fut] = e
    return [results[fut] for fut in futures]
The idea is to call wait until either all futures are done or an exception is observed. The exception is in turn either stored or propagated, depending on whether it matches allowed_exc. If all the results and allowed exceptions have been successfully collected, they are returned in the correct order, as with asyncio.gather.
The approach of modifying the implementation of asyncio.gather might easily fail on a newer Python version, since the code accesses private attributes of Future objects. Also, alternative event loops like uvloop could make their gather and wait more efficient, which would automatically benefit an xgather based on the public API.
Test code:
import asyncio

async def fail():
    1/0

async def main():
    print(await xgather(asyncio.sleep(1), fail(), allowed_exc=OSError))

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
When run, the code raises immediately, which is expected: ZeroDivisionError doesn't match the allowed OSError exception. Changing OSError to ZeroDivisionError causes the code to sleep for 1 second and output [None, ZeroDivisionError('division by zero')].
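If you don't need the fail-fast behavior, a simpler variant (with different semantics: it waits for all tasks to finish before re-raising anything) is to gather with return_exceptions=True and then re-raise whatever falls outside the allowed class. A sketch (`gather_allow` is my own name):

```python
import asyncio

async def gather_allow(*coros, allowed_exc):
    # Collect every result, then re-raise the first disallowed exception.
    results = await asyncio.gather(*coros, return_exceptions=True)
    for r in results:
        if isinstance(r, BaseException) and not isinstance(r, allowed_exc):
            raise r
    return results

async def main():
    async def ok():
        return 1
    async def boom():
        raise OSError("network down")
    return await gather_allow(ok(), boom(), allowed_exc=OSError)

out = asyncio.run(main())
print(out)
```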
