I have a simple view that I'm using to experiment with AJAX.
def get_shifts_for_day(request, year, month, day):
    data = dict()
    data['d'] = year
    data['e'] = month
    data['x'] = User.objects.all()[2]
    return HttpResponse(simplejson.dumps(data), mimetype='application/javascript')
This returns the following:
TypeError at /sched/shifts/2009/11/9/
<User: someguy> is not JSON serializable
If I take out the data['x'] line so that I'm not referencing any models it works and returns this:
{"e": "11", "d": "2009"}
Why can't simplejson serialize one of the default Django models? I get the same behavior with any model I use.
You just need to add, in your .dumps call, a default=encode_myway argument to let simplejson know what to do when you pass it data whose types it does not know -- the answer to your "why" question is of course that you haven't told poor simplejson what to DO with one of your models' instances.
And of course you need to write encode_myway to provide JSON-encodable data, e.g.:
def encode_myway(obj):
    if isinstance(obj, User):
        return [obj.username,
                obj.firstname,
                obj.lastname,
                obj.email]
        # and/or whatever else
    elif isinstance(obj, OtherModel):
        return []  # whatever
    elif ...:
        ...
    else:
        raise TypeError(repr(obj) + " is not JSON serializable")
Basically, JSON knows about VERY elementary data types (strings, ints and floats, grouped into dicts and lists) -- it's YOUR responsibility as an application programmer to match everything else into/from such elementary data types, and in simplejson that's typically done through a function passed to default= at dump or dumps time.
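Concretely, the return statement of the view from the question then becomes:

return HttpResponse(simplejson.dumps(data, default=encode_myway),
                    mimetype='application/javascript')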
Alternatively, you can use the JSON serializer that's part of Django; see the docs.
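If you go that route, a minimal sketch would look something like the following (note that django.core.serializers works on querysets/model instances rather than arbitrary dicts, so the view changes shape a bit):

from django.core import serializers

def get_shifts_for_day(request, year, month, day):
    # serialize() takes an iterable of model instances and returns a JSON string
    json_users = serializers.serialize('json', User.objects.all()[:3])
    return HttpResponse(json_users, mimetype='application/javascript')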
I'd like to convert JSON back into the data type returned by Azure Form Recognizer. I'm able to convert the result into a dict and then into JSON, but I'm not able to do the opposite without analysing the document again. How can I reuse the Azure Form Recognizer result type without having to analyse the document more than once?
Here is what I have.
import json

from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

endpoint = "endpoint"
key = "key"

# create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
document_analysis_client = DocumentAnalysisClient(endpoint=endpoint, credential=AzureKeyCredential(key))

# Extract text from doc using "prebuilt-document"
with open("file.pdf", "rb") as f:
    poller = document_analysis_client.begin_analyze_document(
        "prebuilt-document", document=f)
result = poller.result()

# round-trip the result through a dict and a JSON string
d = result.to_dict()
json_string = json.dumps(d)
print(json_string)

data = json.loads(json_string)
result_from_json = result.from_dict(data)
What's the scenario for converting the JSON model representation back to the SDK model? The operations don't take the result models as an input, so, as a solution, the original result model can be stored somewhere until it needs to be used again.
Also, for converting the model to JSON, it would be better to use the AzureJSONEncoder so that the SDK can properly serialize all types. For example:
from azure.core.serialization import AzureJSONEncoder

# save the dictionary as JSON content in a JSON file, use the AzureJSONEncoder
# to help make types, such as dates, JSON serializable
# NOTE: AzureJSONEncoder is only available with azure.core>=1.18.0.
with open('data.json', 'w') as f:
    json.dump(analyze_result_dict, f, cls=AzureJSONEncoder)
Here is a link to the full sync sample: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2/sample_convert_to_and_from_dict.py
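For the other direction later on (loading the saved JSON and rebuilding the SDK model without re-analysing the document), that sample does roughly the following, using the AnalyzeResult type returned by the prebuilt models:

import json

from azure.ai.formrecognizer import AnalyzeResult

# load the dict that was written with AzureJSONEncoder and rebuild the SDK model
with open('data.json', 'r') as f:
    analyze_result_dict = json.load(f)

restored_result = AnalyzeResult.from_dict(analyze_result_dict)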
It's a pretty standard task in Django REST Framework to supply additional args/kwargs to a serializer in order to set fields not from request.data but from URL parameters or cookies. For instance, I need to set the user field of my Comment model to request.user upon a POST request. Those additional arguments are called context.
Several questions (1, 2) on StackOverflow suggest that I override the get_serializer_context() method of my ModelViewSet. I did, and it doesn't help. I tried to understand what's wrong and found that I can't tell from the source code how this context system is supposed to work in general. (Documentation on this matter is missing, too.)
Can anyone explain where the serializer adds context to the normal request data? I found two places where it saves values from the context:
serializer.save(), a method which mixes kwargs with validated data, but it is usually called with no arguments (e.g. by the ModelMixins);
fields.__new__(), which caches args and kwargs, but it seems that nobody ever reads them later.
Whenever you use generic views or viewsets, DRF (3.3.2) adds the request object, the view object and the format to the serializer context. You can use serializer.context to access, let's say, request.user in the serializer.
This is added when get_serializer() is called. Inside that, it calls the get_serializer_context() method, where all these parameters are added to the context.
DRF source code for reference:
class GenericAPIView(views.APIView):
    """
    Base class for all other generic views.
    """

    def get_serializer(self, *args, **kwargs):
        """
        Return the serializer instance that should be used for validating and
        deserializing input, and for serializing output.
        """
        serializer_class = self.get_serializer_class()
        kwargs['context'] = self.get_serializer_context()
        return serializer_class(*args, **kwargs)

    def get_serializer_context(self):
        """
        Extra context provided to the serializer class.
        """
        return {
            'request': self.request,
            'format': self.format_kwarg,
            'view': self
        }
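For example, inside the serializer you can pull the request back out of that context. A minimal sketch (CommentSerializer and its field names are hypothetical, borrowed from the question's Comment model):

from rest_framework import serializers

class CommentSerializer(serializers.ModelSerializer):
    class Meta:
        model = Comment  # the question's Comment model, assumed importable here
        fields = ['id', 'text', 'user']
        read_only_fields = ['user']

    def create(self, validated_data):
        # get_serializer() put the request into self.context (see the DRF code above)
        validated_data['user'] = self.context['request'].user
        return super().create(validated_data)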
Quoting the question: "to set fields not from request.data but from URL parameters or cookies. For instance, I need to set the user field of my Comment model to request.user upon a POST request."
This is how I handle both cases in my ModelViewSet:
def perform_create(self, serializer):
    # Get article id from url e.g. http://myhost/article/1/comments/
    # obviously assumes urls.py is setup right etc etc
    article_pk = self.kwargs['article_pk']
    article = get_object_or_404(Article.objects.all(), pk=article_pk)
    # Get user from request
    serializer.save(author=self.request.user, article=article)
Unfortunately nested objects are not standard for DRF, but that's beside the point. :)
I would like to better understand how Resolv::DNS handles records that are not directly supported. These records are represented by the Resolv::DNS::Resource::Generic class, but I could not find documentation about how to get the data out of this record.
Specifically, my zone will contain SSHFP and TLSA records, and I need a way to get to that data.
Through reverse engineering, I found the answer - documenting it here for others to see.
Please note that this involves undocumented features of the Resolv::DNS module, and the implementation may change over time.
Resource Records that the Resolv::DNS module does not understand are represented not through the Generic class, but rather through a subclass whose name represents the type and class of the DNS response - for instance, an SSHFP record (type 44) will be represented as Resolv::DNS::Resource::Generic::Type44_Class1
The object will contain a method "data" that gives you access to the RDATA of the record, in plain binary format.
Thus, here is how to access an SSHFP record:
def handle_sshfp(rr)
  # the RDATA is a string but contains binary data
  data = rr.data.bytes
  algo = data[0].to_s
  fptype = data[1].to_s
  fp = data[2..-1]
  hex = fp.map { |b| b.to_s(16).rjust(2, '0') }.join(':')
  puts "The SSHFP record is: #{algo} #{fptype} #{hex}"
end
Resolv::DNS.open do |dns|
  all_records = dns.getresources('myfqdn.example.com', Resolv::DNS::Resource::IN::ANY) rescue nil
  all_records.each do |rr|
    if rr.is_a? Resolv::DNS::Resource::Generic then
      classname = rr.class.name.split('::').last
      handle_sshfp(rr) if classname == "Type44_Class1"
    end
  end
end
I'm using TurboGears 2.3, working on validating forms with formencode, and I need some guidance.
I have a form which covers 2 different objects. They are almost the same, but with some differences.
When I submit my form, I want to validate 2 things:
Some basic data
Some specific data for the specific object
Here are my schemas:
class basicQuestionSchema(Schema):
    questionType = validators.OneOf(['selectQuestion', 'yesNoQuestion', 'amountQuestion'])
    allow_extra_fields = True

class amount_or_yes_no_question_Schema(Schema):
    questionText = validators.NotEmpty()
    product_id_radio = object_exist_by_id(entity=Product, not_empty=True)
    allow_extra_fields = True

class selectQuestionSchema(Schema):
    questionText = validators.NotEmpty()
    product_ids = validators.NotEmpty()
    allow_extra_fields = True
And here are my controller's methods:
@expose()
@validate(validators=basicQuestionSchema(), error_handler=questionEditError)
def saveQuestion(self, **kw):
    type = kw['questionType']
    if type == 'selectQuestion':
        self.save_select_question(**kw)
    else:
        self.save_amount_or_yes_no_question(**kw)

@validate(validators=selectQuestionSchema(), error_handler=questionEditError)
def save_select_question(self, **kw):
    ...
    # Do stuff
    ...

@validate(validators=amount_or_yes_no_question_Schema(), error_handler=questionEditError)
def save_amount_or_yes_no_question(self, **kw):
    ...
    # Do other stuff
    ...
What I wanted to do was validate twice, with different schemas. This doesn't work: only the first @validate runs, and the others are not applied (apparently ignored).
So, what am I doing wrong?
Thanks for the help
@validate is part of the request flow, so when you call a controller method manually it is not executed (it is not a standard Python decorator; all TG2 decorators actually only register a hook using tg.hooks, so they are bound to the request flow).
What you are trying to achieve should be done during the validation phase itself; you can then call save_select_question and save_amount_or_yes_no_question as plain object methods after validation.
See http://runnable.com/VF_2-W1dWt9_fkPr/conditional-validation-in-turbogears-for-python for a working example of conditional validation.
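In the meantime, here is a minimal sketch of doing the per-type validation by hand inside saveQuestion. This is only an illustration using formencode's Schema.to_python directly, not the code from the linked example; calling self.questionEditError directly is an assumption about how your error handler works:

from formencode import Invalid

@expose()
@validate(validators=basicQuestionSchema(), error_handler=questionEditError)
def saveQuestion(self, **kw):
    # pick the schema that matches the submitted question type
    if kw['questionType'] == 'selectQuestion':
        schema = selectQuestionSchema()
    else:
        schema = amount_or_yes_no_question_Schema()
    try:
        clean = schema.to_python(kw)
    except Invalid:
        # questionEditError is assumed to be a controller method you can call directly;
        # adapt this to however your error handler actually works
        return self.questionEditError(**kw)
    if kw['questionType'] == 'selectQuestion':
        return self.save_select_question(**clean)
    return self.save_amount_or_yes_no_question(**clean)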
I'm submitting tasks using a Load Balanced View.
I would like to be able to connect from a different client and view the remaining tasks by the function and parameters that were submitted.
For example:
def someFunc(parm1, parm2):
    return parm1 + parm2

lbv = client.load_balanced_view()

async_results = []
for parm1 in [0,1,2]:
    for parm2 in [0,1,2]:
        ar = lbv.apply_async(someFunc, parm1, parm2)
        async_results.append(ar)
From the client I submitted this from I can figure out which result went with which function call based on their order in the async_results array.
What I would like to know is: how can I figure out the function and parameters associated with a msg_id, if I am retrieving the results from a different client by using the queue_status or history commands to get msg_ids and client.get_result to retrieve the results?
These things are pickled, and stored in the 'buffers' in the hub's database. If you want to look at them, you have to fetch those buffers from the database, and unpack them.
Assuming you have a list of msg_ids, here is a way that you can reconstruct the f, args, and kwargs for all of those requests:
# msg_ids is a list of msg_id, however you decide to get that
from IPython.zmq.serialize import unpack_apply_message

# load the buffers from the hub's database:
query = rc.db_query({'msg_id': {'$in': msg_ids}}, keys=['msg_id', 'buffers'])
# query is now a list of dicts with two keys - msg_id and buffers

# now we can generate a dict by msg_id of the original function, args, and kwargs:
requests = {}
for q in query:
    msg_id = q['msg_id']
    f, args, kwargs = unpack_apply_message(q['buffers'])
    requests[msg_id] = (f, args, kwargs)
From this, you should be able to associate tasks based on their function and args.
One caveat: since f has been through pickling, the comparison f is original_f will often be False, so you have to do looser comparisons, such as matching on f.__module__ and f.__name__ or similar.
For a bit more detail, here is an example that generates some requests, then reconstructs and associates them based on the function and arguments, given some prior knowledge of what the original requests may have looked like.
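That fuller example isn't reproduced here, but a minimal sketch of the matching step, using someFunc from the question and the requests dict built above (whether args comes back as a list or a tuple may depend on the IPython version, hence the tuple() call):

# find the msg_ids whose original request was someFunc(1, 2), comparing
# loosely by name because the unpickled f is not the original function object
matching = [
    msg_id
    for msg_id, (f, args, kwargs) in requests.items()
    if f.__name__ == 'someFunc' and tuple(args) == (1, 2)
]
print(matching)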