How can I give a command a cooldown?
I have this code:
@commands.command()
async def reward(self, ctx):
    await ctx.author.send("You claimed your reward")
    Money.add(ctx.author.id, 50)
I want the command to be usable only once every 5 minutes.
I think this would work:
@commands.command()
@commands.cooldown(1, 300, commands.BucketType.user)
async def reward(self, ctx):
    await ctx.author.send("You claimed your reward")
    Money.add(ctx.author.id, 50)
You can use @commands.cooldown({rate}, {time}, commands.BucketType.{type}).
The rate is the number of times the command can be used per cooldown period (if you put 2, for example, the command can be used twice before the cooldown kicks in).
For the time, you can use any duration you want, as long as it is expressed in seconds; if it isn't, like 5 minutes, convert the minutes to seconds (5 min x 60 sec = 300 sec).
And lastly, commands.BucketType.{type} determines whether the cooldown applies on a per-server, per-channel, per-user, or global basis. There are four main types:
BucketType.default for a global basis.
BucketType.user for a per-user basis.
BucketType.guild for a per-server (guild) basis.
BucketType.channel for a per-channel basis.
You can find more about it in the discord.py documentation.
And the updated code:
@commands.command()
@commands.cooldown(1, 300, commands.BucketType.user)
async def reward(self, ctx):
    await ctx.author.send("You claimed your reward")
    Money.add(ctx.author.id, 50)
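As a side note, invoking the command while it is still on cooldown raises commands.CommandOnCooldown, which you can catch in an error handler. A minimal sketch (the handler name is illustrative):

@reward.error
async def reward_error(self, ctx, error):
    # CommandOnCooldown exposes retry_after, the seconds left on the cooldown
    if isinstance(error, commands.CommandOnCooldown):
        await ctx.send(f"You can claim your reward again in {error.retry_after:.0f} seconds.")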
Related
I made a discord economy bot. I want to include a loan function, where the user can ask for a loan. There would be a command cooldown. However, if they don't pay after the command cooldown has ended, the bot should automatically take the money.
@bot.command()
@commands.cooldown(1, 60*60*24*7, commands.BucketType.user)
async def loan(ctx, amount: int):
    loan_available = int(client.get_user_bal(Devada, testbot)['cash'])
    if int(amount) <= loan_available:
        time.sleep(1)
        await ctx.channel.send('You have been given ' + ''.join(str(amount) + ". You will have to pay " + str((int(amount)+int(amount)*0.1)) + " baguttes within 2 weeks."))
        client.change_user_bal(str(ctx.guild.id), str(ctx.author.id), cash=0, bank=amount, reason='loan')
        client.change_user_bal(str(ctx.guild.id), testbot, cash=-amount, bank=0, reason='loan')
        must_pay.update({ctx.author.name: str(amount)})
    else:
        time.sleep(2)
        await ctx.channel.send("You can only request a loan within " + str(loan_available))
Is there any way to detect when the cooldown is over?
The @commands.cooldown decorator is used to add a cooldown to a command so that users can't spam that same command; instead, they need to wait a certain amount of time (60*60*24*7 seconds, in this case) before being able to re-use it.
However, if you want the bot to wait for 604800 seconds and then take the money back, you should use the asyncio module to wait for that amount of time without blocking the program's other commands. Here is how you can re-adjust your code:
import asyncio

@bot.command()
@commands.cooldown(1, 60*60*24*7, commands.BucketType.user)
async def loan(ctx, amount: int):
    loan_available = int(client.get_user_bal(Devada, testbot)['cash'])
    if int(amount) <= loan_available:
        await asyncio.sleep(1)  # asyncio.sleep instead of time.sleep, so the event loop isn't blocked
        await ctx.channel.send('You have been given ' + ''.join(str(amount) + ". You will have to pay " + str((int(amount)+int(amount)*0.1)) + " baguttes within 2 weeks."))
        client.change_user_bal(str(ctx.guild.id), str(ctx.author.id), cash=0, bank=amount, reason='loan')
        client.change_user_bal(str(ctx.guild.id), testbot, cash=-amount, bank=0, reason='loan')
        must_pay.update({ctx.author.name: str(amount)})
        # New asyncio code
        await asyncio.sleep(60*60*24*7)  # wait for 60*60*24*7 seconds (one week)
        # Here, just add the code to take the money away after those 60*60*24*7 seconds
    else:
        await asyncio.sleep(2)
        await ctx.channel.send("You can only request a loan within " + str(loan_available))
Note that if you restart your bot during that week, the code after the asyncio.sleep will never execute, so you must keep the bot online without restarting it.
The correct way of checking whether a command's cooldown has finished is either to check that Command.is_on_cooldown(ctx) returns False, or to check the time left on the command with Command.get_cooldown_retry_after(ctx). You would need to check this every so often, so you could create a Task that calls one of those functions and then evaluates the result to do whatever you want.
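For example, a minimal sketch (assuming discord.py 1.4+ for get_cooldown_retry_after; take_money_back is a hypothetical helper you'd write):

import asyncio

async def watch_cooldown(ctx):
    command = bot.get_command('loan')
    while command.is_on_cooldown(ctx):
        # get_cooldown_retry_after(ctx) returns the seconds left on the cooldown
        await asyncio.sleep(command.get_cooldown_retry_after(ctx))
    # the cooldown is over: collect the debt here
    take_money_back(ctx.author)  # hypothetical helper

# inside the loan command, after granting the loan:
# bot.loop.create_task(watch_cooldown(ctx))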
I've put together a command that takes a decent length of time for Discord to execute:
@bot.command()
async def repeat_timer(ctx, *lines):
    for line in lines:
        await ctx.send(line)
    time = "5s"
    await ctx.send(time)
You could send $repeat_timer 1 2 3 4 5 6 7 8 9 10 and it will send each numeral back as a separate message.
I would like to know the time between the loop beginning and the final iteration completing, i.e. in the above example, the message "10" being posted to the channel.
The above shows the code I have working so far, but I can't see how to set up a timer to know when the task is complete.
There are a ton of ways of doing this; probably the easiest and fastest is to use the time.perf_counter function:
import time

@bot.command()
async def repeat_timer(ctx, *lines):
    start = time.perf_counter()
    for line in lines:
        await ctx.send(line)
    end = time.perf_counter()
    total_time = end - start  # in seconds
    await ctx.send(total_time)
I am trying to retrieve stock prices and process them as they come in. I am a beginner with concurrency, but I thought this setup seemed suited to an asyncio producers-consumers model in which each producer retrieves a stock price and passes it to the consumers via a queue. The consumers then do the stock-price processing in parallel (multiprocessing), since the work is CPU-intensive; that way, multiple consumers would already be working while not all the producers have finished retrieving data. In addition, I would like to implement a step in which, if a consumer finds that the stock price it's working on is invalid, we spawn a new consumer job for that stock.
So far, I have the following toy code that sort of gets me there, but it has issues with my process_data function (the consumer).
from concurrent.futures import ProcessPoolExecutor
import asyncio
import random
import time

random.seed(444)

# producers
async def retrieve_data(ticker, q):
    '''
    Pretend we're using aiohttp to retrieve stock prices from a URL.
    Place a tuple of stock ticker and price into the asyncio queue as it becomes available.
    '''
    start = time.perf_counter()  # start timer
    await asyncio.sleep(random.randint(4, 8))  # pretend we're calling some URL
    price = random.randint(1, 100)  # pretend this is the price we retrieved
    print(f'{ticker} : {price} retrieved in {time.perf_counter() - start:0.1f} seconds')
    await q.put((ticker, price))  # place the price into the asyncio queue

# consumers
async def process_data(q):
    while True:
        data = await q.get()
        print(f"processing: {data}")
        with ProcessPoolExecutor() as executor:
            loop = asyncio.get_running_loop()
            result = await loop.run_in_executor(executor, data_processor, data)
        # if the output of data_processor failed, send the ticker back to the queue to retrieve data again
        if not result[2]:
            print(f'{result[0]} data invalid. Retrieving again...')
            await retrieve_data(result[0], q)  # add a new task
            q.task_done()  # end this task
        else:
            q.task_done()  # so that q.join() knows when the task is done

async def main(tickers):
    q = asyncio.Queue()
    producers = [asyncio.create_task(retrieve_data(ticker, q)) for ticker in tickers]
    consumers = [asyncio.create_task(process_data(q))]
    await asyncio.gather(*producers)
    await q.join()  # implicitly awaits consumers, too; blocks until all items in the queue have been received and processed
    for c in consumers:
        c.cancel()  # cancel the consumer tasks, which would otherwise wait endlessly for additional queue items

'''
RUN IN JUPYTER NOTEBOOK
'''
start = time.perf_counter()
tickers = ['AAPL', 'AMZN', 'TSLA', 'C', 'F']
await main(tickers)
print(f'total elapsed time: {time.perf_counter() - start:0.2f}')

'''
RUN IN TERMINAL
'''
# if __name__ == "__main__":
#     start = time.perf_counter()
#     tickers = ['AAPL', 'AMZN', 'TSLA', 'C', 'F']
#     asyncio.run(main(tickers))
#     print(f'total elapsed time: {time.perf_counter() - start:0.2f}')
The data_processor() function below, called by process_data() above, needs to be in a different cell in a Jupyter notebook, or in a separate module (from what I understand, to avoid a PicklingError):
from multiprocessing import current_process
import random  # needed here too if this lives in a separate module
import time

def data_processor(data):
    ticker = data[0]
    price = data[1]
    print(f'Started {ticker} - {current_process().name}')
    start = time.perf_counter()  # start time counter
    time.sleep(random.randint(4, 5))  # mimic some random processing time
    # pretend we're processing the price; let the processing outcome be invalid if the price is an odd number
    if price % 2 == 0:
        is_valid = True
    else:
        is_valid = False
    print(f"{ticker}'s price {price} validity: --{is_valid}--"
          f' Elapsed time: {time.perf_counter() - start:0.2f} seconds')
    return (ticker, price, is_valid)
THE ISSUES
Instead of using Python's multiprocessing module, I used concurrent.futures' ProcessPoolExecutor, which I read is compatible with asyncio (What kind of problems (if any) would there be combining asyncio with multiprocessing?). But it seems that I have to choose between retrieving the output (result) of the function called by the executor and being able to run several subprocesses in parallel. With the construct below, the subprocesses run sequentially, not in parallel.
with ProcessPoolExecutor() as executor:
    loop = asyncio.get_running_loop()
    result = await loop.run_in_executor(executor, data_processor, data)
Removing the result = await in front of loop.run_in_executor(executor, data_processor, data) allows several consumers to run in parallel, but then I can't collect their results in the parent process. I need the await for that, and then of course the rest of the code block fails.
How can I have these subprocesses run in parallel and provide their output? Perhaps this needs a different construct, or something other than the producers-consumers model.
The part of the code that requests invalid stock prices to be retrieved again works (provided I can get the result from above), but it is run in the subprocess that calls it and blocks new consumers from being created until the request is fulfilled. Is there a way to address this?
# if the output of data_processor failed, send the ticker back to the queue to retrieve data again
if not result[2]:
    print(f'{result[0]} data invalid. Retrieving again...')
    await retrieve_data(result[0], q)  # add a new task
    q.task_done()  # end this task
else:
    q.task_done()  # so that q.join() knows when the task is done
But it seems that I have to choose between retrieving the output (result) of the function called by the executor and being able to run several subprocesses in parallel.
Luckily this is not the case: you could also use asyncio.gather() to wait for multiple items at once, but you obtain data items one by one from the queue, so you don't have a batch of items to process. The simplest solution is to just start multiple consumers. Replace:
# the single-element list looks suspicious anyway
consumers = [asyncio.create_task(process_data(q))]
with:
# now we have an actual list
consumers = [asyncio.create_task(process_data(q)) for _ in range(16)]
Each consumer will wait for an individual task to finish, but that's ok because you'll have a whole pool of them working in parallel, which is exactly what you wanted.
Also, you might want to make executor a global variable and not use with, so that the process pool is shared by all consumers and lasts as long as the program. That way consumers will reuse the worker processes already spawned instead of having to spawn a new process for each job received from the queue. (That's the whole point of having a process "pool".) In that case you probably want to add executor.shutdown() at the point in the program where you don't need the executor anymore.
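A rough sketch of that refactor, reusing the names from the question's code:

executor = ProcessPoolExecutor()  # created once, shared by all consumers

async def process_data(q):
    loop = asyncio.get_running_loop()
    while True:
        data = await q.get()
        # reuse the long-lived pool instead of spawning a fresh one per item
        result = await loop.run_in_executor(executor, data_processor, data)
        ...  # handle result as before
        q.task_done()

# once the queue has been drained, e.g. at the end of main():
# executor.shutdown()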
I am making use of aiohttp in one of my projects and would like to limit the number of requests made per second. I am using asyncio.Semaphore to do that. My challenge is I may want to increase/decrease the number of requests allowed per second.
For example:
limit = asyncio.Semaphore(10)

async with limit:
    async with aiohttp.request(...):
        ...
    await asyncio.sleep(1)
This works great; that is, it limits aiohttp.request to 10 concurrent requests per second. However, I may want to increase or decrease the Semaphore._value. I can do limit._value = 20, but I am not sure if this is the right approach or whether there is another way to do it.
Accessing the private _value attribute is not the right approach, for at least two reasons: first, the attribute is private and can be removed, renamed, or change meaning in a future version without notice; second, increasing the limit won't be noticed by a semaphore that already has waiters.
Since asyncio.Semaphore doesn't support modifying the limit dynamically, you have two options: implementing your own Semaphore class that does support it, or not using a Semaphore at all. The latter is probably easier, as you can always replace a semaphore-enforced limit with a fixed number of worker tasks that receive jobs through a queue. Assuming you currently have code that looks like this:
async def fetch(limit, arg):
    async with limit:
        # your actual code here
        return result

async def tweak_limit(limit):
    # here you'd like to be able to increase the limit
    ...

async def main():
    limit = asyncio.Semaphore(10)
    asyncio.create_task(tweak_limit(limit))
    results = await asyncio.gather(*[fetch(limit, x) for x in range(1000)])
You could express it without a semaphore by creating workers in advance and giving them work to do:
async def fetch_task(queue, results):
    while True:
        arg = await queue.get()
        # your actual code here
        results.append(result)
        queue.task_done()

async def main():
    # fill the queue with jobs for the workers
    queue = asyncio.Queue()
    for x in range(1000):
        await queue.put(x)
    # create the initial pool of workers
    results = []
    workers = [asyncio.create_task(fetch_task(queue, results))
               for _ in range(10)]
    asyncio.create_task(tweak_limit(workers, queue, results))
    # wait for workers to process the entire queue
    await queue.join()
    # finally, cancel the now-idle worker tasks
    for w in workers:
        w.cancel()
    # results are now available
The tweak_limit() function can now increase the limit simply by spawning new workers:
async def tweak_limit(workers, queue, results):
    while True:
        await asyncio.sleep(1)
        if need_more_workers:
            workers.append(asyncio.create_task(fetch_task(queue, results)))
Using workers and queues is a more complex solution, though: you have to think about issues like setup, teardown, exception handling, backpressure, and so on.
A Semaphore can be implemented with a Lock if you don't mind a bit of inefficiency (you will see why). Here's a simple implementation of a dynamic-value semaphore:
import asyncio

class DynamicSemaphore:
    def __init__(self, value=1):
        self._lock = asyncio.Lock()
        if value < 0:
            raise ValueError("Semaphore initial value must be >= 0")
        self.value = value

    async def __aenter__(self):
        await self.acquire()
        return None

    async def __aexit__(self, exc_type, exc, tb):
        self.release()

    def locked(self):
        return self.value == 0

    async def acquire(self):
        async with self._lock:
            # the inefficiency: poll until a slot frees up instead of being
            # woken by release(); the lock also serializes waiting acquirers
            while self.value <= 0:
                await asyncio.sleep(0.1)
            self.value -= 1
            return True

    def release(self):
        self.value += 1
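A hypothetical usage sketch; since value is a plain attribute, other code can raise or lower it at runtime:

limit = DynamicSemaphore(10)

async def fetch(url):
    async with limit:
        ...  # your aiohttp request here

# elsewhere, e.g. in a monitoring task: allow 10 more concurrent holders
# (waiters blocked in acquire() will notice on their next poll)
limit.value += 10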
Using discord.py-rewrite, how can we diagnose my_background_task to find the reason why its print statement is not printing every 3 seconds?
Details:
The problem that I am observing is that print('inside loop') is printed once in my logs, but not the expected 'every three seconds'. Could there be an exception somewhere that I am not catching?
Note: I do see print(f'Logged in as {bot.user.name} - {bot.user.id}') in the logs, so on_ready seems to work and that method cannot be to blame.
I tried following this example: https://github.com/Rapptz/discord.py/blob/async/examples/background_task.py
however, I did not use its client = discord.Client() statement because I think I can achieve the same using bot, as explained here: https://stackoverflow.com/a/53136140/6200445
import asyncio
import discord
from discord.ext import commands

token = open("token.txt", "r").read()

def get_prefix(client, message):
    prefixes = ['=', '==']
    if not message.guild:
        prefixes = ['==']  # Only allow '==' as a prefix when in DMs; this is optional
    # Allow users to @mention the bot instead of using a prefix when using a command. Also optional.
    # Do `return prefixes` if you don't want to allow mentions instead of a prefix.
    return commands.when_mentioned_or(*prefixes)(client, message)

bot = commands.Bot(  # Create a new bot
    command_prefix=get_prefix,  # Set the prefix
    description='A bot for doing cool things. Commands list:',  # description for the bot
    case_insensitive=True  # Make the commands case insensitive
)
# case_insensitive=True is used as the commands are case sensitive by default

cogs = ['cogs.basic', 'cogs.embed']

@bot.event
async def on_ready():  # Do this when the bot is logged in
    print(f'Logged in as {bot.user.name} - {bot.user.id}')  # Print the name and ID of the bot logged in.
    for cog in cogs:
        bot.load_extension(cog)
    return

async def my_background_task():
    await bot.wait_until_ready()
    print('inside loop')  # This prints one time. How to make it print every 3 seconds?
    counter = 0
    while not bot.is_closed:
        counter += 1
        await bot.send_message(channel, counter)
        await channel.send(counter)
        await asyncio.sleep(3)  # task runs every 3 seconds

bot.loop.create_task(my_background_task())
bot.run(token)
From a cursory inspection, it would seem your problem is that you are only calling it once: your method my_background_task does not run its print statement once every three seconds; it is instead the message-sending that happens once every three seconds. For the intended behavior, place the print statement inside your while loop (see the sketch after the links below).
Although I am using rewrite, I found both of these resources helpful.
https://github.com/Rapptz/discord.py/blob/async/examples/background_task.py
https://github.com/Rapptz/discord.py/blob/rewrite/examples/background_task.py
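For illustration, a minimal corrected version of the loop (rewrite API assumed; the channel ID is a placeholder you'd replace with your own):

async def my_background_task():
    await bot.wait_until_ready()
    channel = bot.get_channel(123456789012345678)  # placeholder channel ID
    counter = 0
    while not bot.is_closed():  # note: is_closed() is a method in rewrite
        print('inside loop')    # now prints on every iteration
        counter += 1
        await channel.send(counter)  # rewrite API; bot.send_message was the old async-branch API
        await asyncio.sleep(3)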