I have 4 intents in my Lex bot. The logic of these intents is very similar, with slight changes in the business rules.
Is it good practice to implement a single Lambda function and, based on the intent, call different functions?
Can this approach introduce any potential bottleneck or performance impact?
There is no issue with using a single Lambda function for different intents. You can point all the intents at the same Lambda function, check the intent name inside that Lambda, and call the relevant function/method within the same Lambda.
As you said, the intents are very similar, so you could also factor the shared logic into common functions used by those intents, for example:
def common_function():
    # some processing shared by all intents
    return cm

def intent2(intent_request):
    cm = common_function()
    # rest of the intent-specific processing
    return output

def intent1(intent_request):
    cm = common_function()
    # rest of the intent-specific processing
    return output
def dispatch(intent_request):
    logger.debug('dispatch userId={}, intentName={}'.format(
        intent_request['userId'], intent_request['currentIntent']['name']))
    intent_name = intent_request['currentIntent']['name']
    if intent_name == 'intent1':
        return intent1(intent_request)
    if intent_name == 'intent2':
        return intent2(intent_request)
    if intent_name == 'intent3':
        return intent3(intent_request)
    if intent_name == 'intent4':
        return intent4(intent_request)
    raise Exception('Intent with name ' + intent_name + ' not supported')
def lambda_handler(event, context):
    logger.debug(event)
    logger.debug('event.bot.name={}'.format(event['bot']['name']))
    return dispatch(event)
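If the number of intents grows, a dictionary-based dispatch keeps the routing flat. A minimal sketch of a drop-in replacement for the dispatch function above (the handler names are the same placeholders as before):

INTENT_HANDLERS = {
    'intent1': intent1,
    'intent2': intent2,
    'intent3': intent3,
    'intent4': intent4,
}

def dispatch(intent_request):
    intent_name = intent_request['currentIntent']['name']
    handler = INTENT_HANDLERS.get(intent_name)
    if handler is None:
        raise Exception('Intent with name ' + intent_name + ' not supported')
    return handler(intent_request)

Adding a fifth intent then only requires a new entry in the dictionary.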
I have two async functions like below:
async def get_job_details(key: str) -> dict:
    ...
    return data

async def get_resource(data: dict) -> dict:
    ...
    return data
I want to call both functions in a loop using pydash's flow, like below:
py_.flow(await get_job_details, await get_resource)(key)
I'm getting the error TypeError: object function can't be used in 'await' expression.
This works fine without async. Even calling them without flow works too:
data = await get_job_details(key)
data = await get_resource(data)
But I want to use some looping here, since there may be more function calls later, and they must be called sequentially because each depends on the previous one.
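Roughly, the kind of loop I have in mind (just a sketch, assuming each step takes the previous step's output), since py_.flow composes plain synchronous functions and cannot await intermediate results:

async def run_pipeline(key):
    # order matters: each step receives the result of the previous one
    steps = [get_job_details, get_resource]  # more steps can be appended later
    data = key
    for step in steps:
        data = await step(data)
    return data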
I am creating a discord.py bot that can check a user's server credits from the Tatsu bot.
I use the Tatsu API to get the user's credits, but I get the error 'object has no attribute 'credits''. The same error also appears when I use avatar_url, avatar_hash, ...
This is the Tatsu library: https://github.com/PumPum7/Tatsu.py
@commands.command()
async def transfer(self, ctx, member: discord.Member):
    wrapper = ApiWrapper(key=os.environ['token'])
    user_profile = await wrapper.get_profile(member.id)
    await ctx.send(user_profile.credits)
I've taken a look at the source code of the library (it's a really bad one, to be honest). It seems that when an internal exception is thrown, instead of raising and propagating it, the author decided to return it (exact lines are here). I have no idea what the author wanted to do with that; nonetheless, you can use a simple if-statement to check whether the method returned an error:
@commands.command()
async def transfer(self, ctx, member: discord.Member):
    wrapper = ApiWrapper(key=os.environ['token'])
    user_profile = await wrapper.get_profile(member.id)
    if not isinstance(user_profile, Exception):
        await ctx.send(user_profile.credits)
    else:
        exc = user_profile
        print(f"An error happened:\n{exc.__class__.__name__}: {exc}")
I have multiple API routes which return data by querying the database individually.
Now I'm trying to build a dashboard which queries the above APIs. How should I put the API calls in a queue so that they are executed asynchronously?
I tried
await queue.put({'response_1': await api_1(**kwargs), 'response_2': await api_2(**kwargs)})
It seems as though the data is fetched while the task is being put into the queue (the awaits run before the item is enqueued).
Now I'm using
await queue.put(('response_1', api_1(**args_dict)))
in the producer, and in the consumer I'm parsing the tuple and making the API calls, which I think I'm doing wrong.
Question 1:
Is there a better way to do it?
This is the code I'm using to create tasks:
producers = [create_task(producer(queue, **args_dict)) for row in stats]
consumers = [create_task(consumer(queue)) for row in stats]
await gather(*producers)
await queue.join()
for con in consumers:
    con.cancel()
Question 2: Should I use create_task or ensure_future? Sorry if this is repetitive, but I can't understand the difference, and after searching online I only became more confused.
I'm using FastAPI and the databases (async) packages.
I'm using a tuple instead of a dictionary, like await queue.put(('response_1', api_1(**kwargs))), and I get:
./app/dashboard.py:90: RuntimeWarning: coroutine 'api_1' was never awaited
item: Tuple = await queue.get_nowait()
My code for the consumer is:
async def consumer(return_obj: dict, queue: Queue):
    item: Tuple = await queue.get_nowait()
    print(f'consumer took {item[0]} from queue')
    return_obj.update({f'{item[0]}': await item[1]})
    await queue.task_done()
If I don't use get_nowait the consumer gets stuck because the queue may be empty, but if I use get_nowait the error above is shown.
I haven't defined a max queue length.
-----------EDIT-----------
Producer:
async def producer(queue: Queue, **kwargs):
    await queue.put(('response_1', api_1(**kwargs)))
You can drop the await from your first snippet and send the coroutine object in the queue. A coroutine object is a coroutine that was called, but not yet awaited.
# producer:
await queue.put({'response_1': api_1(**kwargs),
                 'response_2': api_2(**kwargs)})
...
# consumer:
while True:
    dct = await queue.get()
    for name, api_coro in dct.items():
        result = await api_coro
        print('result of', name, ':', result)
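Note that if you keep the queue.join() / consumer-cancel pattern from your task-creation code, the consumer also has to call task_done() once per processed item, otherwise queue.join() never returns; roughly:

# consumer that cooperates with queue.join():
while True:
    dct = await queue.get()
    for name, api_coro in dct.items():
        print('result of', name, ':', await api_coro)
    queue.task_done()  # mark the dequeued item as fully processed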
Should I use create_task or ensure_future?
If the argument is the result of invoking a coroutine function, you should use create_task (see this comment by Guido for explanation). As the name implies, it will return a Task instance that drives that coroutine. The task can also be awaited, but it continues to run in the background.
ensure_future is a much more specialized function that converts various kinds of awaitable objects to their corresponding futures. It is useful when implementing functions like asyncio.gather(), which accept different kinds of awaitable objects for convenience and need to convert them into futures before working with them.
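For illustration, a minimal sketch (with a made-up fetch coroutine) of the two in practice:

import asyncio

async def fetch():
    await asyncio.sleep(0.1)
    return 42

async def main():
    task = asyncio.create_task(fetch())   # takes a coroutine object, returns a Task
    fut = asyncio.ensure_future(fetch())  # also accepts Tasks/Futures and passes them through
    print(await task, await fut)

asyncio.run(main())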
I'm writing some training material for the Groovy language and I'm preparing an example that explains closures.
The example is a simple caching closure for "expensive" methods, withCache:
def expensiveMethod( Long a ) {
    withCache(a) {
        sleep(rnd())
        a*5
    }
}
So, now my question is: which of the two following implementations would be faster and more idiomatic in Groovy?
def withCache = { key, Closure operation ->
    if (!cacheMap.containsKey(key)) {
        cacheMap.put(key, operation())
    }
    cacheMap.get(key)
}
or
def withCache = { key, Closure operation ->
    def cached = cacheMap.get(key)
    if (cached) return cached
    def res = operation()
    cacheMap.put(key, res)
    res
}
I prefer the first example, as it doesn't use any variables, but I wonder whether calling the map's get method is slower than returning the variable containing the computed result.
Obviously the answer is "it depends on the size of the Map" but, out of curiosity, I would like to have the opinion of the community.
Thanks!
Firstly, I agree with OverZealous that worrying about two get operations is premature optimization. The second example is also not equivalent to the first: the first allows null values, for example, while the second uses Groovy truth in the if, which means that null evaluates to false, as does, for example, an empty list/array/map. So if you want to demonstrate calling a Closure, I would go with the first one. If you want something more idiomatic, I would do this instead for your case:
def expensiveMethod( Long a ) {
    sleep(rnd())
    a*5
}

def cache = [:].withDefault(this.&expensiveMethod)
Is it possible to use =LOAD(...) with a function rather than a controller/function string?
e.g.:
Controller:
def test():
    print "test"

def index():
    return dict(test=test)
View:
{{=LOAD(test, ajax=True)}}
rather than:
View:
{{=LOAD('controller', 'test', ajax=True)}}
The main reason is that I want to use lambda/generated functions, which cannot be accessed this way.
No. But not because the syntax is not supported; it is logically impossible: LOAD() is executed in a different HTTP request than the one in which the lambda would be executed, and therefore the latter would be undefined. Moreover, to perform the Ajax callback, the called function must have a name; it cannot be a lambda. We could come up with a creative use of cache so that LOAD stores the lambda in the cache:
def callback():
    """ a generic callback """
    return cache.ram(request.args(0), lambda: None, None)(**request.vars)

def LOAD2(f, vars={}):
    """ a new load function """
    import uuid
    u = str(uuid.uuid4())
    cache.ram(u, lambda f=f: f, 0)
    return LOAD(request.controller, 'callback', args=u, vars=vars, ajax=True)

def index():
    """ example of usage """
    a = LOAD2(lambda: 'hello world')
    return dict(a=a)
But this would only work with cache.ram and would require periodic cache cleanup.