web2py: using function in LOAD (ajax)

Is it possible to use =LOAD(...) with a function rather than a controller/function string?
e.g.:
Controller:
def test():
    print "test"

def index():
    return dict(test=test)
View:
{{=LOAD(test, ajax=True)}}
rather than:
View:
{{=LOAD('controller', 'test', ajax=True)}}
The main reason being, I want to use lambda/generated functions which cannot be accessed this way.

No, but not because the syntax is unsupported; it is logically impossible: LOAD() is executed in a different HTTP request than the one in which the lambda would be executed, so the lambda would be undefined by then. Moreover, to perform the Ajax callback, the called function must have a name; it cannot be a lambda. We could come up with a creative use of the cache so that LOAD stores the lambda in it:
def callback():
    """ a generic callback """
    return cache.ram(request.args(0), lambda: None, None)(**request.vars)

def LOAD2(f, vars={}):
    """ a new load function """
    import uuid
    u = str(uuid.uuid4())
    cache.ram(u, lambda f=f: f, 0)
    return LOAD(request.controller, 'callback', args=u, vars=vars, ajax=True)

def index():
    """ example of usage """
    a = LOAD2(lambda: 'hello world')
    return dict(a=a)
But this would only work with cache.ram and would require periodic cache cleanup.
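For completeness, one way to make that cleanup workable is to give the generated keys a common prefix so they can be cleared in one sweep. This is only a sketch; it assumes web2py's cache.ram.clear(regex=...) helper, and cleanup_load2_cache is a hypothetical task name:

def LOAD2(f, vars={}):
    """ as before, but with a prefix that cleanup can match """
    import uuid
    u = 'load2.' + str(uuid.uuid4())
    cache.ram(u, lambda f=f: f, 0)
    return LOAD(request.controller, 'callback', args=u, vars=vars, ajax=True)

def cleanup_load2_cache():
    """ hypothetical task to run periodically, e.g. from the scheduler """
    cache.ram.clear(regex=r'^load2\.')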

Related

async python functions call in loop using pydash flow

I have two async functions, like below:
async def get_job_details(key: str) -> dict:
    ...
    return data

async def get_resource(data: dict) -> dict:
    ...
    return data
I want to call both functions in a loop using pydash flow, like below:
py_.flow(await get_job_details, await get_resource)(key)
I get the error TypeError: object function can't be used in 'await' expression.
This works fine without async. Even calling them without flow works too:
data = await get_job_details(key)
data = await get_resource(data)
But I want to use some looping here, since there may be more function calls later, and they must be called sequentially because each depends on the previous one.
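For reference, py_.flow composes synchronous callables and never awaits the intermediate coroutines, which is why the TypeError appears. A plain async loop gives the same sequential chaining; a minimal sketch (apply_pipeline is a made-up helper name, and the two functions are the ones from the question):

async def apply_pipeline(value, *funcs):
    # await each async function in order, feeding each result forward
    for func in funcs:
        value = await func(value)
    return value

# usage: data = await apply_pipeline(key, get_job_details, get_resource)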

Using one Lambda function to handle multiple intents

I have 4 intents in my Lex bot; the logic of these intents is very similar, with slight changes in business rules.
Is it good practice to implement one Lambda function and, based on the intent, call different functions?
Can this approach introduce any potential bottleneck or performance impact?
There are no issues with using a single Lambda function for different intents. You can call a single Lambda function from all the intents, check the intent name in that Lambda, and call the relevant function/method in the same Lambda.
As you said, the intents are very similar, so you could probably also use common functions to do the shared work for those intents:
def common_function():
    # some processing
    return cm

def intent1(intent_request):
    cm = common_function()
    # rest processing
    return output

def intent2(intent_request):
    cm = common_function()
    # rest processing
    return output

def dispatch(intent_request):
    logger.debug('dispatch userId={}, intentName={}'.format(
        intent_request['userId'], intent_request['currentIntent']['name']))
    intent_name = intent_request['currentIntent']['name']
    if intent_name == 'intent1':
        return intent1(intent_request)
    if intent_name == 'intent2':
        return intent2(intent_request)
    if intent_name == 'intent3':
        return intent3(intent_request)
    if intent_name == 'intent4':
        return intent4(intent_request)
    raise Exception('Intent with name ' + intent_name + ' not supported')

def lambda_handler(event, context):
    logger.debug(event)
    logger.debug('event.bot.name={}'.format(event['bot']['name']))
    return dispatch(event)
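As a side note, once the number of intents grows, the if chain in dispatch() is often replaced by a dict lookup. A small hypothetical variation on the code above:

HANDLERS = {
    'intent1': intent1,
    'intent2': intent2,
    'intent3': intent3,
    'intent4': intent4,
}

def dispatch(intent_request):
    intent_name = intent_request['currentIntent']['name']
    handler = HANDLERS.get(intent_name)
    if handler is None:
        raise Exception('Intent with name ' + intent_name + ' not supported')
    return handler(intent_request)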

Extracting the function and arguments from a coroutine

Is it possible to extract the function and arguments of a coroutine object in Python 3.6?
Context: currently I have something like this:
async def func(*args):
    ...

ret = await remotely(func, x, y)
Under the hood, remotely pickles func, x, and y, scp's that to a different server, where it unpickles them, executes func(x,y), pickles the result, scp's that back, and finally unpickles it into ret.
This API feels distasteful to me; I'd prefer to have:
ret = await remotely(func(x, y))
I could do this if I could pickle the coroutine object represented by func(x, y), but when I tried that, I get:
TypeError: can't pickle coroutine objects
So my alternate hope is that I can extract f, x, and y from f(x, y), hence this question.
So when you do ret = await remotely(func(x, y)), you actually construct the coroutine object for func. Fortunately, you can extract the information you need from coroutine objects, which you can send over for remote execution.
So first of all you can get the function name using the __qualname__ attribute. This will give you the fully qualified name, i.e. if the coroutine is nested, it will get you the full path to your function.
Next, you can extract the argument values from the frame object of the coroutine.
So this is what your remote function would look like:
async def remote(cr):
    # Get the function name
    fname = cr.__qualname__
    # Get the argument values
    frame = cr.cr_frame
    args = frame.f_locals  # dict object
    result = await ...  # your scp stuff
    return result
There is just one caveat: you should indicate that the function must only be used the way you have posted, i.e.
ret = await remotely(func(x, y))
...in other words, the coroutine should be "fresh" and not halfway executed (which is hardly possible anyway if you construct it right as you pass it to remote). Otherwise, the f_locals value might include other local variables defined before any await.
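A quick demonstration of that caveat (sample and extra are made-up names; send(None) merely advances the coroutine to its first suspension point):

import asyncio

async def sample(x):
    extra = x * 2            # local created before the first await
    await asyncio.sleep(0)
    return extra

coro = sample(21)
print(coro.cr_frame.f_locals)  # {'x': 21} -- fresh: arguments only
coro.send(None)                # advance to the first await
print(coro.cr_frame.f_locals)  # {'x': 21, 'extra': 42} -- a local leaked in
coro.close()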
I found no clean solution, but we can find the function as a referrer to the coroutine's code object:
import gc
import inspect
from typing import Any, Callable, Coroutine

def get_function_from_coroutine(coroutine: Coroutine) -> Callable:
    referrers = gc.get_referrers(coroutine.cr_code)
    return next(filter(lambda ref: inspect.isfunction(ref), referrers))

def get_kwargs_from_coroutine(coroutine: Coroutine) -> dict[str, Any]:
    return coroutine.cr_frame.f_locals

async def foo(a: str, b: int):
    return a * b

coro = foo("test", b=2)
print(get_function_from_coroutine(coro))  # <function foo at 0x7ff61ece9820>
print(get_kwargs_from_coroutine(coro))  # {'a': 'test', 'b': 2}
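Putting the two helpers together, the receiving side could rebuild and execute the call like this (rerun is a made-up name; in the question's setting the function and kwargs would be pickled and shipped to the other host first):

import asyncio

async def rerun(coroutine):
    func = get_function_from_coroutine(coroutine)
    kwargs = get_kwargs_from_coroutine(coroutine)
    coroutine.close()  # discard the original, never-awaited coroutine
    return await func(**kwargs)

print(asyncio.run(rerun(foo("test", b=2))))  # testtest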

Is it ok for a Django mixin to inherit another mixin?

I'm pretty sure the answer to this question is obviously "NO", since Django mixins are supposed to inherit from object, but I can't find an alternative solution to my problem :(
To make the question as simple as possible:
views.py
class JSONResponseMixin(object):
    def render_to_response(self, context):
        "Returns a JSON response containing 'context' as payload"
        return self.get_json_response(self.convert_context_to_json(context))

    def get_json_response(self, content, **httpresponse_kwargs):
        "Construct an `HttpResponse` object."
        return http.HttpResponse(content,
                                 content_type='application/json',
                                 **httpresponse_kwargs)

    def convert_context_to_json(self, context):
        "Convert the context dictionary into a JSON object"
        # Note: This is *EXTREMELY* naive; in reality, you'll need
        # to do much more complex handling to ensure that arbitrary
        # objects -- such as Django model instances or querysets
        # -- can be serialized as JSON.
        return json.dumps(context)

class HandlingAJAXPostMixin(JSONResponseMixin):
    def post(self, request, *args, **kwargs):
        .....
        data = {'somedata': somedata}
        return JSONResponseMixin.render_json_response(data)

class UserDetailView(HandlingAJAXPostMixin, DetailView):
    model = MyUser
    .....
So my problem is that, for multiple views, I want to respond to their post requests with the same JSON response. That is why I defined HandlingAJAXPostMixin, so that I could reuse it for other views. Since HandlingAJAXPostMixin returns a JSON response, it requires the render_json_response method, which is defined in JSONResponseMixin. This is the reason I am making my HandlingAJAXPostMixin inherit from JSONResponseMixin, but this obviously seems wrong :(..
Any suggestions..?
Thanks!!!
It's perfectly valid for a mixin to inherit from another mixin - in fact, this is how most of Django's more advanced mixins are made.
However, the idea of mixins is that they are reusable parts that, together with other classes, build a complete, usable class. Right now, your JSONResponseMixin might as well be a separate class that you don't inherit from, or the methods might just be module-level functions. It definitely works, there's nothing wrong with it, but that's not the idea of a mixin.
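To make that concrete, here is roughly what those helpers would look like as plain module-level functions instead of a mixin (a sketch of the point, not a recommendation, reusing the http and json imports from the question):

def render_json_response(content, **httpresponse_kwargs):
    return http.HttpResponse(content,
                             content_type='application/json',
                             **httpresponse_kwargs)

def convert_context_to_json(context):
    return json.dumps(context)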
If you look at Django's BaseDetailView, you see the following get() method:
def get(self, request, *args, **kwargs):
    self.object = self.get_object()
    context = self.get_context_data(object=self.object)
    return self.render_to_response(context)
get_object() and get_context_data() are defined in the superclasses of BaseDetailView, but render_to_response() isn't. It's okay for mixins to rely on methods that their superclasses don't define; this allows different classes that inherit from BaseDetailView to supply their own implementation of render_to_response(). Right now, in Django, there's only one such subclass, though.
However, logic is delegated as much as possible to those small, reusable methods that the mixins supply. That's what you want to aim for. If/else logic is avoided as much as possible - the most advanced logic in Django's default views is:
if form.is_valid():
    return self.form_valid(form)
else:
    return self.form_invalid(form)
That's why very similar views, like CreateView and UpdateView, are in fact two separate views, while they could easily be a single view with some additional if/else logic. The only difference is that CreateView does self.object = None, while UpdateView does self.object = self.get_object().
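Simplified from Django's actual BaseCreateView and BaseUpdateView, that difference amounts to:

class BaseCreateView(ModelFormMixin, ProcessFormView):
    def get(self, request, *args, **kwargs):
        self.object = None
        return super(BaseCreateView, self).get(request, *args, **kwargs)

class BaseUpdateView(ModelFormMixin, ProcessFormView):
    def get(self, request, *args, **kwargs):
        self.object = self.get_object()
        return super(BaseUpdateView, self).get(request, *args, **kwargs)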
Right now you are using a DetailView that defines a get() method that returns the result of self.render_to_response(). However, you override render_to_response() to return a JSON response instead of a template-based HTML response. You're using a mixin that you don't want to use (SingleObjectTemplateResponseMixin) and then overriding its behavior to do something you don't want it to do either, just to get the view doing what you want. A better idea is to write an alternative to DetailView whose only job is to supply a JSON response based on a single object. To do this, I would create a SingleObjectJSONResponseMixin, similar to the SingleObjectTemplateResponseMixin, and create a class JSONDetailView that combines all the needed mixins into a single class:
class SingleObjectJSONResponseMixin(object):
    def to_json(self, context):
        return json.dumps(context)

    def render_to_response(self, context, **httpresponse_kwargs):
        return HttpResponse(self.to_json(context),
                            content_type='application/json',
                            **httpresponse_kwargs)

class BaseJSONDetailView(SingleObjectMixin, View):
    # if you want to do the same for get, inherit just from BaseDetailView
    def post(self, request, *args, **kwargs):
        self.object = self.get_object()
        context = self.get_context_data(object=self.object)
        return self.render_to_response(context)

class JSONDetailView(SingleObjectJSONResponseMixin, BaseJSONDetailView):
    """
    Return JSON detail data of a single object.
    """
Notice that this is almost exactly the same as the BaseDetailView and the SingleObjectTemplateResponseMixin provided by Django. The difference is that you define a post() method, and that the rendering is much simpler: just a conversion of the context data to JSON rather than a complete template rendering. However, logic is deliberately kept as simple as possible, and methods that don't depend on each other are separated as much as possible. This way, SingleObjectJSONResponseMixin can e.g. be mixed with BaseUpdateView to easily create an AJAX/JSON-based UpdateView. Subclasses can easily override the different parts of the mixins, like overriding to_json() to supply a certain data structure. Rendering logic is where it belongs (in render_to_response()).
Now all you need to do to create a specific JSONDetailView is to subclass and define which model to use:
class UserJSONDetailView(JSONDetailView):
    model = MyUser
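Wiring it up is then the usual URLconf one-liner (the URL pattern here is hypothetical):

from django.conf.urls import url

urlpatterns = [
    url(r'^users/(?P<pk>\d+)/json/$', UserJSONDetailView.as_view()),
]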

Which closure implementation is faster between two examples

I'm writing some training material for the Groovy language and I'm preparing an example which would explain Closures.
The example is a simple caching closure for "expensive" methods, withCache
def expensiveMethod( Long a ) {
    withCache (a) {
        sleep(rnd())
        a*5
    }
}
So, now my question is: which of the two following implementations is faster and more idiomatic in Groovy?
def withCache = { key, Closure operation ->
    if (!cacheMap.containsKey(key)) {
        cacheMap.put(key, operation())
    }
    cacheMap.get(key)
}
or
def withCache = { key, Closure operation ->
    def cached = cacheMap.get(key)
    if (cached) return cached
    def res = operation()
    cacheMap.put(key, res)
    res
}
I prefer the first example, as it doesn't use any variables, but I wonder if accessing the Map's get method is slower than returning the variable containing the computed result.
Obviously the answer is "it depends on the size of the Map" but, out of curiosity, I would like to have the opinion of the community.
Thanks!
Firstly, I agree with OverZealous that worrying about two get operations is premature optimization. The second example is also not equivalent to the first: the first allows null values, for example, while the second uses Groovy truth in the if, which means that null evaluates to false, as does, for example, an empty list/array/map. So if you want to show calling a Closure, I would go with the first one. If you want something more idiomatic, I would do this instead for your case:
def expensiveMethod( Long a ) {
    sleep(rnd())
    a*5
}

def cache = [:].withDefault( this.&expensiveMethod )
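A usage sketch: withDefault computes the value on the first access to a missing key and stores it back into the map, so each key pays the expensive call at most once:

def v1 = cache[5L]   // first access: runs expensiveMethod(5)
def v2 = cache[5L]   // second access: served straight from the map
assert v1 == 25 && v1 == v2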
