Is there a way to check for another autocomplete that was already completed, i.e. checking if an option was x in a previous autocomplete for y - discord.py

Specifically, I'm trying to make a client-side Discord music player and need it to play specific albums (for organization, and also for high-quality audio), and I cannot for the life of me figure out how to get the value from one autocomplete inside a second autocomplete that runs after it.
I tried checking for the first autocomplete specifically using bot.tree.varName, which obviously didn't work, and Namespace doesn't seem to offer a way to find a specific slash-command option's value either.
In other words:
# First autocomplete
@playalbum.autocomplete("album")
async def playalbum_autocomplete(
    interaction: discord.Interaction,
    current: str,
) -> list[app_commands.Choice[str]]:
    # Would define from an index, but not for now.
    AlbumList = ["test"]
    return [
        app_commands.Choice(name=album, value=album)
        for album in AlbumList if current.lower() in album.lower()
    ]
# Starting on the next autocomplete
@playalbum.autocomplete("song")
async def playalbum_song_autocomplete(
    interaction: discord.Interaction,
    current: str,
) -> list[app_commands.Choice[str]]:
    # Would define from an index, but not for now.
    if app_commands.Namespace.album == "test":  # Point that didn't work.
        songList = ["testSong"]
    return [
        app_commands.Choice(name=song, value=song)
        for song in songList if current.lower() in song.lower()
    ]
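A minimal sketch of what should work in discord.py 2.x, assuming the playalbum command above: the Interaction passed to an autocomplete callback exposes interaction.namespace, which holds the values the user has already filled in for the command's other options, so the chosen album can be read from there.
@playalbum.autocomplete("song")
async def playalbum_song_autocomplete(
    interaction: discord.Interaction,
    current: str,
) -> list[app_commands.Choice[str]]:
    # interaction.namespace carries the options filled in so far;
    # .album may be None if the user hasn't picked an album yet.
    album = interaction.namespace.album
    songList = ["testSong"] if album == "test" else []
    return [
        app_commands.Choice(name=song, value=song)
        for song in songList if current.lower() in song.lower()
    ]
Since autocomplete only suggests values, the user can still submit something you didn't offer, so the album/song pair should be validated again in the command callback itself.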

Related

How to make bot respond without prefix in certain situation while still responding to other commands in discord.py

I'm making a game using discord.py and the bot has a few commands already, but once the game itself starts I want users to be able to input answers without requiring a prefix, and I'm not sure how to do this. I tried using on_message to accept answers immediately, and while I'm pretty sure the on_message handler works how I want, adding it causes the other commands to stop being executed. Is there another way to do this? Here's a bit of the code:
bot = commands.Bot(command_prefix="-")  # sets the bot prefix

@bot.event
async def on_message(ctx):  # accepts all answers without needing a command
    if len(playerQuestions[0]) == True and (str(ctx.channel.name) == channels[0] or str(ctx.channel.name) == channels[1]):
        await guess(ctx, ctx.content)

@bot.command()
async def join(ctx):  # subroutine for the join command - allows users to join the lobby before the game begins
    if gameInProgress == True:  # blocks the command from being used if a game is in progress
        await ctx.send("There is currently a game in progress, so this action cannot currently be performed.")
    else:
        response = ""
        flag = False
        for i in range(0, len(players)):  # compares sender of message to each player in the lobby to check if user is already in the lobby
            if ctx.author == players[i]:
                await ctx.channel.send("You are already in the lobby")
                flag = True
        if flag == False:  # adds the user to the lobby if not already in and announces it
            players.append(ctx.author)
            response = (ctx.author.mention) + " has joined the lobby. \n"
            response = response + (str(len(players)) + " in lobby. \n")
            await ctx.send(response)
So if someone types "-join" and the condition in on_message therefore isn't met, how can I make the program still execute the code in the join command?
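A minimal sketch of the usual fix, based on the code above: overriding on_message suppresses discord.py's command dispatching, so prefixed commands stop working unless the handler explicitly passes the message on with bot.process_commands.
@bot.event
async def on_message(message):
    # handle in-game answers first, as before
    if len(playerQuestions[0]) == True and (str(message.channel.name) == channels[0] or str(message.channel.name) == channels[1]):
        await guess(message, message.content)
    # then hand the message to the command system so "-join" still runs
    await bot.process_commands(message)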

Async tasks returning no results

I have around 10-15 Ecto queries which I want to run asynchronously in my API code.
I am using Task.async and Task.yield_many.
Following is the code for the async tasks:
def get_tasks() do
  task_1 =
    Task.async(SomeModule, :some_function, [
      param_1,
      param_2
    ])

  task_2 =
    Task.async(SomeModule, :some_function, [
      param_1,
      param_2
    ])

  task_3 =
    Task.async(SomeModule, :some_function, [
      param_1,
      param_2
    ])

  [task_1, task_2, task_3]
end
I get the tasks' results in my main function as:
[
  {_, task_1},
  {_, task_2},
  {_, task_3}
] =
  [
    task_1,
    task_2,
    task_3
  ]
  |> MyCode.TaskHelper.yeild_multiple_tasks()
And my task helper code is given below:
defmodule MyCode.TaskHelper do
  def get_results_or_shutdown(tasks_with_results) do
    Enum.map(tasks_with_results, fn {task, res} ->
      res || Task.shutdown(task, :brutal_kill)
    end)
  end

  @doc """
  Returns the results of multiple tasks run in parallel.

  ## Parameters
    - task_list: list, a list of all tasks
  """
  def yeild_multiple_tasks(task_list) do
    task_list
    |> Task.yield_many()
    |> get_results_or_shutdown()
  end
end
Each task is an Ecto query.
The issue is that the tasks behave randomly: sometimes they return results, sometimes they don't, and at no point have all of the tasks returned results (I have written 3 tasks here for illustration, but I have around 10-15 async tasks).
I ran the code synchronously, and it returned the correct results (obviously). I tried changing the pool_size for Repo in the config to 50, but to no avail.
Can someone please help me out with this? I am quite stuck here.

Why does discord.py-rewrite bot.loop.create_task(my_background_task()) execute once but not repeat

Using discord.py-rewrite, how can we diagnose my_background_task to find out why its print statement is not printing every 3 seconds?
Details:
The problem that I am observing is that "print('inside loop')" is printed once in my logs, but not every three seconds as expected. Could there be an exception somewhere that I am not catching?
Note: I do see print(f'Logged in as {bot.user.name} - {bot.user.id}') in the logs, so on_ready seems to work and that method cannot be to blame.
I tried following this example: https://github.com/Rapptz/discord.py/blob/async/examples/background_task.py
however, I did not use its client = discord.Client() statement because I think I can achieve the same using "bot", similar to what is explained here: https://stackoverflow.com/a/53136140/6200445
import asyncio

import discord
from discord.ext import commands

token = open("token.txt", "r").read()

def get_prefix(client, message):
    prefixes = ['=', '==']
    if not message.guild:
        prefixes = ['==']  # Only allow '==' as a prefix when in DMs, this is optional

    # Allow users to @mention the bot instead of using a prefix when using a command. Also optional
    # Do `return prefixes` if you don't want to allow mentions instead of a prefix.
    return commands.when_mentioned_or(*prefixes)(client, message)

bot = commands.Bot(  # Create a new bot
    command_prefix=get_prefix,  # Set the prefix
    description='A bot for doing cool things. Commands list:',  # description for the bot
    case_insensitive=True  # Make the commands case insensitive
)

# case_insensitive=True is used as the commands are case sensitive by default

cogs = ['cogs.basic', 'cogs.embed']

@bot.event
async def on_ready():  # Do this when the bot is logged in
    print(f'Logged in as {bot.user.name} - {bot.user.id}')  # Print the name and ID of the bot logged in.
    for cog in cogs:
        bot.load_extension(cog)
    return

async def my_background_task():
    await bot.wait_until_ready()
    print('inside loop')  # This prints one time. How to make it print every 3 seconds?
    counter = 0
    while not bot.is_closed:
        counter += 1
        await bot.send_message(channel, counter)
        await channel.send(counter)
        await asyncio.sleep(3)  # task runs every 3 seconds

bot.loop.create_task(my_background_task())
bot.run(token)
From a cursory inspection, it would seem your problem is that you are only calling it once. Your method my_background_task is not printing once every three seconds; it is instead your send_message call inside the while loop that runs once every three seconds. For the intended behavior, place the print statement inside your while loop.
Although I am using rewrite, I found both of these resources helpful.
https://github.com/Rapptz/discord.py/blob/async/examples/background_task.py
https://github.com/Rapptz/discord.py/blob/rewrite/examples/background_task.py
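Applying the answer's advice to the example, a corrected task might look like the sketch below (the channel ID is a hypothetical placeholder; note that in rewrite is_closed is a method and messages are sent through the channel object):
async def my_background_task():
    await bot.wait_until_ready()
    channel = bot.get_channel(123456789012345678)  # hypothetical channel ID
    counter = 0
    while not bot.is_closed():       # rewrite: is_closed() is a method
        print('inside loop')         # now prints on every iteration
        counter += 1
        await channel.send(counter)  # rewrite: send through the channel
        await asyncio.sleep(3)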

Save Google Cloud Speech API operation(job) object to retrieve results later

I'm struggling to use the Google Cloud Speech API with the Ruby client (v0.22.2).
I can execute long running jobs and can get results if I use
job.wait_until_done!
but this locks up a server for what can be a long period of time.
According to the API docs, all I really need is the operation name(id).
Is there any way of creating a job object from the operation name and retrieving it that way?
I can't seem to create a functional new job object in such a way as to use the id from @grpc_op.
What I want to do is something like:
speech = Google::Cloud::Speech.new(auth_credentials)
job = speech.recognize_job file, options
saved_job = job.to_json  # Or some element of that object such that I can retrieve it.
Later, I want to do something like....
job_object = Google::Cloud::Speech::Job.new(saved_job)
job_object.reload!
job_object.done?
job_object.results
Really hoping that makes sense to somebody.
I'm struggling quite a bit with Google's Ruby clients, on the basis that everything seems to be translated into objects which are much more complex than the ones required to use the API.
Is there some trick that I'm missing here?
You can monkey-patch this functionality to the version you are using, but I would advise upgrading to google-cloud-speech 0.24.0 or later. With those more current versions you can use Operation#id and Project#operation to accomplish this.
require "google/cloud/speech"
speech = Google::Cloud::Speech.new
audio = speech.audio "path/to/audio.raw",
encoding: :linear16,
language: "en-US",
sample_rate: 16000
op = audio.process
# get the operation's id
id = op.id #=> "1234567890"
# construct a new operation object from the id
op2 = speech.operation id
# verify the jobs are the same
op.id == op2.id #=> true
op2.done? #=> false
op2.wait_until_done!
op2.done? #=> true
results = op2.results
Update: Since you can't upgrade, you can monkey-patch this functionality onto an older version using the workaround described in GoogleCloudPlatform/google-cloud-ruby#1214:
require "google/cloud/speech"
# Add monkey-patches
module Google
Module Cloud
Module Speech
class Job
def id
#grpc.name
end
end
class Project
def job id
Job.from_grpc(OpenStruct.new(name: id), speech.service).refresh!
end
end
end
end
end
# Use the new monkey-patched methods
speech = Google::Cloud::Speech.new
audio = speech.audio "path/to/audio.raw",
encoding: :linear16,
language: "en-US",
sample_rate: 16000
job = audio.recognize_job
# get the job's id
id = job.id #=> "1234567890"
# construct a new operation object from the id
job2 = speech.job id
# verify the jobs are the same
job.id == job2.id #=> true
job2.done? #=> false
job2.wait_until_done!
job2.done? #=> true
results = job2.results
OK, I have a very ugly way of solving the issue.
Get the id of the Operation from the job object:
operation_id = job.grpc.grpc_op.name
Get an access token to manually use the REST API:
json_key_io = StringIO.new(ENV["GOOGLE_CLOUD_SPEECH_JSON_KEY"])
authorisation = Google::Auth::ServiceAccountCredentials.make_creds(
  json_key_io: json_key_io,
  scope: "https://www.googleapis.com/auth/cloud-platform"
)
token = authorisation.fetch_access_token!
Make an API call to retrieve the operation details.
Once the results are in, the response will include a "done" => true field along with the results. If "done" => true isn't there, you'll have to poll again later until it is.
HTTParty.get(
  "https://speech.googleapis.com/v1/operations/#{operation_id}",
  headers: { "Authorization" => "Bearer #{token['access_token']}" }
)
There must be a better way of doing that. Seems such an obvious use case for the speech API.
Anyone from Google in the house who can explain a much simpler/cleaner way of doing it?

Why does using asyncio.ensure_future for long jobs instead of await run so much quicker?

I am downloading JSONs from an API and am using the asyncio module. The crux of my question is: with the following event loop, implemented like this:
loop = asyncio.get_event_loop()
main_task = asyncio.ensure_future(klass.download_all())
loop.run_until_complete(main_task)
and download_all() implemented as this instance method of a class, which already has downloader objects created and available to it and thus calls each respective download method:
async def download_all(self):
    """ Builds the coroutines, uses asyncio.wait, then sifts for those still pending, loops """
    ret = []
    async with aiohttp.ClientSession() as session:
        pending = []
        for downloader in self._downloaders:
            pending.append(asyncio.ensure_future(downloader.download(session)))
        while pending:
            dne, pnding = await asyncio.wait(pending)
            ret.extend([d.result() for d in dne])
            # Get all the tasks, cannot use "pnding"
            tasks = asyncio.Task.all_tasks()
            pending = [tks for tks in tasks if not tks.done()]
            # Exclude the one that we know hasn't ended yet (UGLY)
            pending = [t for t in pending if not t._coro.__name__ == self.download_all.__name__]
    return ret
Why is it that when I use asyncio.ensure_future in the downloaders' download methods instead of the await syntax, it runs way faster, i.e. seemingly more "asynchronously", as I can see from the logs?
This only works because of the way I have set up detection of all the tasks that are still pending, not letting the download_all method complete and repeatedly calling asyncio.wait.
I thought that the await keyword allowed the event loop mechanism to do its thing and share resources efficiently? How come doing it this way is faster? Is there something wrong with it? For example:
async def download(self, session):
    async with session.request(self.method, self.url, params=self.params) as response:
        response_json = await response.json()
        # Not using await here, as I am "supposed" to
        asyncio.ensure_future(self.write(response_json, self.path))
        return response_json

async def write(self, res_json, path):
    # using aiofiles to write, but it doesn't (seem to?) support direct json
    # so converting to raw text first
    txt_contents = json.dumps(res_json, **self.json_dumps_kwargs)
    async with aiofiles.open(path, 'w') as f:
        await f.write(txt_contents)
With full code implemented and a real API, I was able to download 44 resources in 34 seconds, but when using await it took more than three minutes (I actually gave up as it was taking so long).
When you await in each iteration of a for loop, each iteration waits for its operation to finish before the next one starts.
When you use ensure_future, on the other hand, it doesn't wait: it creates tasks to write all the files and then awaits all of them in the second loop.
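A minimal, self-contained sketch of that difference, with asyncio.sleep standing in for the real downloads and writes (all names here are illustrative, not from the question's code):
import asyncio
import time

async def write(i):
    await asyncio.sleep(1)  # stand-in for one slow file write

async def sequential():
    # awaiting each write runs them one after another: ~3 s total
    for i in range(3):
        await write(i)

async def scheduled():
    # ensure_future schedules each write and returns immediately;
    # all three then run concurrently when awaited together: ~1 s total
    tasks = [asyncio.ensure_future(write(i)) for i in range(3)]
    await asyncio.gather(*tasks)

async def main():
    for fn in (sequential, scheduled):
        start = time.perf_counter()
        await fn()
        print(fn.__name__, round(time.perf_counter() - start, 2), 's')

asyncio.run(main())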
