I have a command that measures the performance of another command: it reports how long the command took to run and whether an error occurred inside it. However, this only works with commands that don't have a permission check, such as requiring administrator permissions.
How can I bypass the permission checks of the command whose performance is being measured?
The code that I currently have is:
import asyncio
import copy
import time
import traceback

import discord
from discord.ext import commands

@commands.command(hidden=True)
@commands.is_owner()
async def perf(self, ctx, *, command):
    """Checks the timing of a command, attempting to suppress HTTP and DB calls."""
    await asyncio.sleep(0.25)
    await ctx.message.delete()
    msg = copy.copy(ctx.message)
    msg.content = ctx.prefix + command
    new_ctx = await self.bot.get_context(msg, cls=type(ctx))
    new_ctx._db = PerformanceMocker()
    # Intercepts the Messageable interface a bit
    new_ctx._state = PerformanceMocker()
    new_ctx.channel = PerformanceMocker()
    new_ctx.author = ctx.author
    if new_ctx.command is None:
        return await ctx.send('No command found')
    print(new_ctx.message.content)
    print(new_ctx.author.guild_permissions)
    start = time.perf_counter()
    try:
        await new_ctx.command.invoke(new_ctx)
    except commands.CommandError:
        end = time.perf_counter()
        success = False
        try:
            await ctx.send(f'```py\n{traceback.format_exc()}\n```')
        except discord.HTTPException:
            pass
    else:
        end = time.perf_counter()
        success = True
    await ctx.send(f'Status: {success} Time: {(end - start) * 1000:.2f}ms')
I would suggest checking out jishaku, which has a built-in debug command that will output any errors and the total time taken.
To answer your question directly, take a look at commands.Command.__call__, which bypasses all checks, converters, and cooldowns.
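As a rough sketch of that change (untested, and assuming the target command takes no arguments beyond the context; since __call__ also skips converters, anything after the context would have to be converted yourself):

```python
# Hypothetical replacement for the invoke call inside perf:
start = time.perf_counter()
try:
    # Calling the Command object directly runs its callback without
    # checks, converters, or cooldowns.
    await new_ctx.command(new_ctx)
except commands.CommandError:
    end = time.perf_counter()
    success = False
```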
I want to create an asynchronous SDK using the aiohttp client for our service. I haven't been able to figure out how to throttle the ClientSession() so that it makes only N requests per second.
class AsyncHTTPClient:
    def __init__(self, api_token, per_second_limit=10):
        self._client = aiohttp.ClientSession(
            headers={"Authorization": f"Bearer {api_token}"}
        )
        self._throttler = asyncio.Semaphore(per_second_limit)

    async def _make_request(self, method, url, **kwargs):
        async with self._throttler:
            return await self._client.request(method, url, **kwargs)

    async def get(self, url, **params):
        return await self._make_request("GET", url, **params)

    async def close(self):
        await self._client.close()
I have this class with get, post, patch, put, delete methods implemented as a call to _make_request.
# As a user of the SDK I run the following code.
async def main():
    try:
        urls = [some_url] * 100
        client = AsyncHTTPClient(my_token, per_second_limit=20)
        await asyncio.gather(*[client.get(url) for url in urls])
    finally:
        await client.close()

asyncio.run(main())
asyncio.Semaphore limits concurrency. That is, when main() runs, the async with self._throttler in client._make_request limits concurrency to 20 requests at a time. However, if those 20 requests finish within one second, new requests are made continuously. What I want is to ensure that only N requests (e.g. 20) are made per second: if all 20 requests finish in 0.8 seconds, sleep for 0.2 seconds before processing more.
I looked at some asyncio.Queue examples with workers, but I'm not sure how I would implement that in my SDK, since creating the workers would fall to the user of the SDK. I want to avoid that and have AsyncHTTPClient handle the requests-per-second limit itself.
Any suggestions/advice/samples will be greatly appreciated.
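One possible approach (a sketch, not taken from your code): replace the plain Semaphore with a small sliding-window limiter that records the time of each admission and sleeps until the oldest one falls out of the one-second window. The RateLimiter class and its wiring into AsyncHTTPClient are assumptions for illustration:

```python
import asyncio
import time

class RateLimiter:
    """Admit at most `per_second` entries per sliding one-second window."""

    def __init__(self, per_second):
        self._per_second = per_second
        self._timestamps = []          # monotonic times of recent admissions
        self._lock = asyncio.Lock()    # serializes admission decisions

    async def __aenter__(self):
        async with self._lock:
            now = time.monotonic()
            # Drop admissions older than one second.
            self._timestamps = [t for t in self._timestamps if now - t < 1.0]
            if len(self._timestamps) >= self._per_second:
                # Window is full: sleep until the oldest admission expires.
                await asyncio.sleep(1.0 - (now - self._timestamps[0]))
                now = time.monotonic()
                self._timestamps = [t for t in self._timestamps if now - t < 1.0]
            self._timestamps.append(now)

    async def __aexit__(self, exc_type, exc, tb):
        return False

# In AsyncHTTPClient.__init__ you would then set
#     self._throttler = RateLimiter(per_second_limit)
# and leave the `async with self._throttler:` in _make_request unchanged.
```

Because the limiter exposes the same async-context-manager interface as a Semaphore, users of the SDK never see it; the per-second pacing stays inside AsyncHTTPClient, which is what you wanted.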
I am very new to aiohttp and asyncio, so apologies for my ignorance up front. I am having difficulty with the event loop portion of the documentation, and I don't think my code below is executing asynchronously. I am trying to take all combinations of two lists via itertools and POST each one as XML. A more full-blown version using the requests module is listed here; however, that is not ideal, as I potentially need to POST 1000+ requests at a time. Here is a sample of how it looks now:
import aiohttp
import asyncio
import itertools
skillid = ['7715','7735','7736','7737','7738','7739','7740','7741','7742','7743','7744','7745','7746','7747','7748' ,'7749','7750','7751','7752','7753','7754','7755','7756','7757','7758','7759','7760','7761','7762','7763','7764','7765','7766','7767','7768','7769','7770','7771','7772','7773','7774','7775','7776','7777','7778','7779','7780','7781','7782','7783','7784']
agent= ['5124','5315','5331','5764','6049','6076','6192','6323','6669','7690','7716']
url = 'https://url'
user = 'user'
password = 'pass'
headers = {
    'Content-Type': 'application/xml'
}

async def main():
    async with aiohttp.ClientSession() as session:
        for x in itertools.product(agent, skillid):
            payload = "<operation><operationType>update</operationType><refURLs><refURL>/unifiedconfig/config/agent/" + x[0] + "</refURL></refURLs><changeSet><agent><skillGroupsRemoved><skillGroup><refURL>/unifiedconfig/config/skillgroup/" + x[1] + "</refURL></skillGroup></skillGroupsRemoved></agent></changeSet></operation>"
            async with session.post(url, auth=aiohttp.BasicAuth(user, password), data=payload, headers=headers) as resp:
                print(resp.status)
                print(await resp.text())

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
I see that coroutines can be used, but I'm not sure that applies here, as there is only a single task to execute. Any clarification is appreciated.
Because you're making a request and then immediately await-ing on it, you are only making one request at a time. If you want to parallelize everything, you need to separate making the request from waiting for the response, and you need to use something like asyncio.gather to wait for the requests in bulk.
In the following example, I've modified your code to connect to a local httpbin instance for testing; I'm making requests to the /delay/<value> endpoint so that each request takes a random amount of time to complete.
The theory of operation here is:
- Move the request code into the asynchronous one_request function, which we use to build an array of tasks.
- Use asyncio.gather to run all the tasks at once.
The one_request function returns an (agent, skillid, response) tuple, so that when we iterate over the responses we can tell which combination of parameters resulted in the given response.
import aiohttp
import asyncio
import itertools
import random

skillid = [
    "7715", "7735", "7736", "7737", "7738", "7739", "7740", "7741", "7742",
    "7743", "7744", "7745", "7746", "7747", "7748", "7749", "7750", "7751",
    "7752", "7753", "7754", "7755", "7756", "7757", "7758", "7759", "7760",
    "7761", "7762", "7763", "7764", "7765", "7766", "7767", "7768", "7769",
    "7770", "7771", "7772", "7773", "7774", "7775", "7776", "7777", "7778",
    "7779", "7780", "7781", "7782", "7783", "7784",
]
agent = [
    "5124", "5315", "5331", "5764", "6049", "6076", "6192", "6323", "6669",
    "7690", "7716",
]

user = 'user'
password = 'pass'
headers = {
    'Content-Type': 'application/xml'
}

async def one_request(session, agent, skillid):
    # I'm setting `url` here because I want a random parameter for
    # each request. You would probably just set this once globally.
    delay = random.randint(0, 10)
    url = f'http://localhost:8787/delay/{delay}'
    payload = (
        "<operation>"
        "<operationType>update</operationType>"
        "<refURLs>"
        f"<refURL>/unifiedconfig/config/agent/{agent}</refURL>"
        "</refURLs>"
        "<changeSet>"
        "<agent>"
        "<skillGroupsRemoved><skillGroup>"
        f"<refURL>/unifiedconfig/config/skillgroup/{skillid}</refURL>"
        "</skillGroup></skillGroupsRemoved>"
        "</agent>"
        "</changeSet>"
        "</operation>"
    )

    # This shows when the task actually executes.
    print('req', agent, skillid)

    async with session.post(
            url, auth=aiohttp.BasicAuth(user, password),
            data=payload, headers=headers) as resp:
        return (agent, skillid, await resp.text())

async def main():
    tasks = []
    async with aiohttp.ClientSession() as session:
        # Add tasks to the `tasks` array
        for x in itertools.product(agent, skillid):
            task = asyncio.ensure_future(one_request(session, x[0], x[1]))
            tasks.append(task)

        print(f'making {len(tasks)} requests')

        # Run all the tasks and wait for them to complete. Return
        # values will end up in the `responses` list.
        responses = await asyncio.gather(*tasks)

    # Just print everything out.
    print(responses)

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
The above code results in about 561 requests, and runs in about 30 seconds with the random delay I've introduced.
This code runs all the requests at once. If you wanted to limit the maximum number of concurrent requests, you could introduce a Semaphore to make one_request block if there are too many active requests.
If you wanted to process responses as they arrive, rather than waiting for everything to complete, you could investigate asyncio.wait instead.
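To illustrate the Semaphore idea, here is a minimal sketch; the HTTP call is replaced by asyncio.sleep so it runs without a server, and the limit of 5 and the 0.1-second "request" are arbitrary stand-ins:

```python
import asyncio
import time

async def one_request(sem, i):
    # The semaphore admits at most N coroutines at a time; the rest
    # wait here until a slot frees up.
    async with sem:
        await asyncio.sleep(0.1)  # stand-in for the real session.post call
        return i

async def main():
    sem = asyncio.Semaphore(5)  # at most 5 concurrent "requests"
    tasks = [one_request(sem, i) for i in range(20)]
    # gather preserves the order of the inputs in its result list.
    return await asyncio.gather(*tasks)

results = asyncio.run(main())
print(results)
```

With 20 tasks, a limit of 5, and 0.1 seconds per task, the whole batch takes about 0.4 seconds: four waves of five.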
From curl's manpage
Use "-C -" to tell curl to automatically find out where/how to resume the transfer. It then uses the given output/input files to figure that out.
So if using
curl \
--retry 9999 \
--continue-at - \
https://mydomain.test/some.file.bin \
| target-program
and the download fails (once) half-way through, and the server supports range requests, will curl retry via a range request, so that target-program receives the full bytes of some.file.bin as its input?
From testing, curl will not retry using a range request.
I wrote a deliberately broken HTTP server that requires the client to retry with a range request to get the full response. Using wget
wget -O - http://127.0.0.1:8888/ | less
results in the full response
abcdefghijklmnopqrstuvwxyz
and I can see on the server side there was a request with 'Range': 'bytes=24-' in the request headers.
However, using curl
curl --retry 9999 --continue-at - http://127.0.0.1:8888/ | less
results in only the incomplete response, and no range request in the server log.
abcdefghijklmnopqrstuvwx
The Python server used:
import asyncio
import re

from aiohttp import web

async def main():
    data = b'abcdefghijklmnopqrstuvwxyz'

    async def handle(request):
        print(request.headers)

        # A too-short response with an exception that will close the
        # connection, so the client should retry
        if 'Range' not in request.headers:
            start = 0
            end = len(data) - 2
            data_to_send = data[start:end]
            headers = {
                'Content-Length': str(len(data)),
                'Accept-Ranges': 'bytes',
            }
            print('Sending headers', headers)
            print('Sending data', data_to_send)
            response = web.StreamResponse(
                headers=headers,
                status=200,
            )
            await response.prepare(request)
            await response.write(data_to_send)
            raise Exception()

        # Any range request
        match = re.match(r'^bytes=(?P<start>\d+)-(?P<end>\d+)?$', request.headers['Range'])
        start = int(match['start'])
        end = int(match['end']) + 1 if match['end'] else len(data)
        data_to_send = data[start:end]
        headers = {
            'Content-Range': 'bytes {}-{}/{}'.format(start, end - 1, len(data)),
            'Content-Length': str(len(data_to_send)),
        }
        print('Sending headers', headers)
        print('Sending data', data_to_send)
        response = web.StreamResponse(
            headers=headers,
            status=206,
        )
        await response.prepare(request)
        await response.write(data_to_send)
        await response.write_eof()
        return response

    app = web.Application()
    app.add_routes([web.get(r'/', handle)])
    runner = web.AppRunner(app)
    await runner.setup()
    site = web.TCPSite(runner, '0.0.0.0', 8888)
    await site.start()
    await asyncio.Future()

asyncio.run(main())