Convert multiple querysets to json in django - ajax

I asked a related question earlier today
In this instance, I have 4 queryset results:
action_count = Action.objects.filter(complete=False, onhold=False).annotate(action_count=Count('name'))
hold_count = Action.objects.filter(onhold=True, hold_criteria__isnull=False).annotate(action_count=Count('name'))
visible_tags = Tag.objects.filter(visible=True).order_by('name').filter(action__complete=False).annotate(action_count=Count('action'))
hidden_tags = Tag.objects.filter(visible=False).order_by('name').filter(action__complete=False).annotate(action_count=Count('action'))
I'd like to return them to an AJAX function. I know I have to convert them to JSON, but I don't know how to include multiple querysets in the same JSON string.

I know this thread is old, but using simplejson to convert Django models doesn't work for many cases, such as decimals (as noted by rebus above).
As stated in the Django documentation, the serialization framework looks like the better choice.
Django’s serialization framework provides a mechanism for “translating” Django models into other formats. Usually these other formats will be text-based and used for sending Django data over a wire, but it’s possible for a serializer to handle any format (text-based or not).
Django Serialization Docs
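For illustration, a minimal sketch of how the serialization framework could combine several querysets in one response (the view name and the DjangoJSONEncoder usage are my own additions, not part of the original answer):
import json

from django.core import serializers
from django.core.serializers.json import DjangoJSONEncoder
from django.http import HttpResponse

def action_summary(request):  # hypothetical view name
    # Serialize each queryset to plain Python structures first.
    # Note: the framework only emits concrete model fields, so annotations
    # such as action_count are not included; use .values() if you need them.
    payload = {
        'actions': serializers.serialize(
            'python', Action.objects.filter(complete=False, onhold=False)),
        'visible_tags': serializers.serialize(
            'python', Tag.objects.filter(visible=True, action__complete=False).order_by('name')),
    }
    # Dump the combined dict once; DjangoJSONEncoder handles decimals,
    # dates and other types the stock encoder rejects.
    return HttpResponse(json.dumps(payload, cls=DjangoJSONEncoder),
                        content_type='application/json')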

You can use Django's simplejson module. This code is untested though!
from django.db.models import Count
from django.http import HttpResponse
from django.utils import simplejson

# Build one dict holding all the querysets, then dump it as a single JSON string.
data = {
    'action_count': list(Action.objects.filter(complete=False, onhold=False).annotate(action_count=Count('name')).values()),
    'hold_count': list(Action.objects.filter(onhold=True, hold_criteria__isnull=False).annotate(action_count=Count('name')).values()),
    ...
}
return HttpResponse(simplejson.dumps(data))
I'll test and rewrite the code as necessary when I have the time to, but this should get you started.
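On current Django versions django.utils.simplejson has been removed, so the same idea is usually written with django.http.JsonResponse instead; a minimal sketch (the view name is illustrative):
from django.db.models import Count
from django.http import JsonResponse

def action_counts(request):  # hypothetical view name
    data = {
        'action_count': list(Action.objects.filter(complete=False, onhold=False)
                             .annotate(action_count=Count('name')).values()),
        'hold_count': list(Action.objects.filter(onhold=True, hold_criteria__isnull=False)
                           .annotate(action_count=Count('name')).values()),
    }
    # JsonResponse uses DjangoJSONEncoder by default, so decimals and dates are handled.
    return JsonResponse(data)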

Related

Azure Forms Recognizer - Saving output results SDK Python

When I used the Form Recognizer API, it returned a JSON file. Now I am using the Form Recognizer SDK with Python, and it returns a data type that seems to be specific to the azure.ai.formrecognizer library.
Does anyone know how to save the data acquired from the Form Recognizer Python SDK in a JSON file like the one returned by the Form Recognizer API?
import os

from azure.ai.formrecognizer import FormRecognizerClient
from azure.identity import ClientSecretCredential

client_secret_credential = ClientSecretCredential(tenant_id, client_id, client_secret)
form_recognizer_client = FormRecognizerClient(endpoint, client_secret_credential)

# Read the form file and start content recognition.
with open(os.path.join(path, file_name), "rb") as fd:
    form = fd.read()

poller = form_recognizer_client.begin_recognize_content(form)
form_pages = poller.result()
Thanks for your question! The Azure Form Recognizer SDK for Python provides helper methods like to_dict and from_dict on the models to facilitate converting the data type in the library to and from a dictionary. You can use the dictionary you get from the to_dict method directly or convert it to JSON.
For your example above, in order to get a JSON output you could do something like:
import json

poller = form_recognizer_client.begin_recognize_content(form)
form_pages = poller.result()

# Convert each FormPage model to a plain dict, then dump the list as JSON.
d = [page.to_dict() for page in form_pages]
json_string = json.dumps(d)
I hope that answers your question, please let me know if you need more information related to the library.
Also, there's more information about our models and their methods on our documentation page here. You can use the dropdown to select the version of the library that you're using.
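To round-trip the result through a file, a minimal sketch (the file name is my own choice; it relies on the from_dict counterpart mentioned above):
import json

from azure.ai.formrecognizer import FormPage

# Save the recognized pages to disk as JSON.
with open("form_pages.json", "w") as f:
    json.dump([page.to_dict() for page in form_pages], f)

# Later, restore the SDK models from the saved JSON.
with open("form_pages.json") as f:
    restored_pages = [FormPage.from_dict(d) for d in json.load(f)]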

LightGBM 'Using categorical_feature in Dataset.' Warning?

From my reading of the LightGBM documentation, one is supposed to define categorical features in the Dataset method. So I have the following code:
cats=['C1', 'C2']
d_train = lgb.Dataset(X, label=y, categorical_feature=cats)
However, I received the following warning message:
/app/anaconda3/anaconda3/lib/python3.7/site-packages/lightgbm/basic.py:1243: UserWarning: Using categorical_feature in Dataset.
warnings.warn('Using categorical_feature in Dataset.')
Why did I get the warning message?
I presume that you get this warning in a call to lgb.train. This function also has a categorical_feature argument, and its default value is 'auto', which means taking categorical columns from the pandas.DataFrame (documentation). The warning, which is emitted at this line, indicates that, even though lgb.train requested that categorical features be identified automatically, LightGBM will use the features specified in the dataset instead.
To avoid the warning, you can pass the same categorical_feature argument to both lgb.Dataset and lgb.train. Alternatively, you can construct the dataset with categorical_feature=None and specify the categorical features only in lgb.train.
As user andrey-popov described, you can use lgb.train's categorical_feature parameter to get rid of this warning.
Below is a simple example of how you could do it:
# Define categorical features
cat_feats = ['item_id', 'dept_id', 'store_id',
             'cat_id', 'state_id', 'event_name_1',
             'event_type_1', 'event_name_2', 'event_type_2']
...
# Define the datasets with the categorical_feature parameter
train_data = lgb.Dataset(X.loc[train_idx],
                         Y.loc[train_idx],
                         categorical_feature=cat_feats,
                         free_raw_data=False)
valid_data = lgb.Dataset(X.loc[valid_idx],
                         Y.loc[valid_idx],
                         categorical_feature=cat_feats,
                         free_raw_data=False)

# And train using the categorical_feature parameter
lgb.train(lgb_params,
          train_data,
          valid_sets=[valid_data],
          verbose_eval=20,
          categorical_feature=cat_feats,
          num_boost_round=1200)
This is less of an answer to the original question and more of an answer for people who are using the sklearn API and encounter this issue.
For those of you who are using the sklearn API, especially one of the cross_val methods from sklearn, there are two solutions you could consider.
Sklearn API solution
A solution that worked for me was to cast categorical fields into the category datatype in pandas.
If you are using pandas df, LightGBM should automatically treat those as categorical. From the documentation:
integer codes will be extracted from pandas categoricals in the
Python-package
It would make sense for this to be the equivalent in the sklearn API to setting categoricals in the Dataset object.
But keep in mind that LightGBM does not officially support virtually any of the non-core parameters for sklearn API, and they say so explicitly:
**kwargs is not supported in sklearn, it may cause unexpected issues.
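A minimal sketch of the category-dtype approach described above (the toy data and the LGBMClassifier call are my own illustration, not from the original answer):
import lightgbm as lgb
import pandas as pd

# Toy frame: 'store_id' and 'item_id' play the role of categorical columns.
X = pd.DataFrame({
    "store_id": list("ababcbca") * 4,
    "item_id": list("xxyzxzyy") * 4,
    "price": [1.0, 2.5, 3.0, 0.5, 1.2, 2.2, 0.9, 1.7] * 4,
})
y = [0, 1, 0, 1, 1, 0, 0, 1] * 4

# Cast the categorical columns to pandas' category dtype ...
for col in ["store_id", "item_id"]:
    X[col] = X[col].astype("category")

# ... and the sklearn estimator picks them up automatically.
clf = lgb.LGBMClassifier(n_estimators=20)
clf.fit(X, y)
print(clf.predict(X.head()))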
Adaptive Solution
The other, more sure-fire way to use methods like cross_val_predict is to create your own wrapper class that uses the core Dataset/train API under the hood but exposes a fit/predict interface for the CV methods to latch onto. That way you get the full functionality of LightGBM with only a little bit of code of your own.
The below sketches out what this could look like.
import lightgbm as ltb

class LGBMSKLWrapper:
    """Thin wrapper exposing a fit/predict interface around lgb.Dataset/lgb.train."""

    def __init__(self, categorical_variables, params):
        self.categorical_variables = categorical_variables
        self.params = params
        self.model = None

    def fit(self, X, y):
        # Build the native Dataset with the categorical features, then train.
        my_dataset = ltb.Dataset(X, y, categorical_feature=self.categorical_variables)
        self.model = ltb.train(params=self.params, train_set=my_dataset)
        return self  # conventional for sklearn-style estimators

    def predict(self, X):
        return self.model.predict(X)
The above lets you load up your parameters when you create the object, and then passes them on to train when the client calls fit.
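For instance, a toy usage of the wrapper (my own illustration; for sklearn's cross-validation helpers to clone the estimator you would additionally need get_params/set_params, e.g. by inheriting from sklearn.base.BaseEstimator):
import pandas as pd

# Purely illustrative data with one categorical column.
X_demo = pd.DataFrame({
    "store_id": pd.Categorical(list("ababcbca") * 4),
    "price": [1.0, 2.5, 3.0, 0.5, 1.2, 2.2, 0.9, 1.7] * 4,
})
y_demo = [0, 1, 0, 1, 1, 0, 0, 1] * 4

wrapper = LGBMSKLWrapper(categorical_variables=["store_id"],
                         params={"objective": "binary", "verbosity": -1})
wrapper.fit(X_demo, y_demo)
print(wrapper.predict(X_demo)[:5])  # binary-objective Booster returns probabilities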

How to send a list of objects in Haxe between client and server?

I am trying to write an online message board in Haxe (OpenFL). There are lots of server/client examples online. But I am new to this area and I do not understand any of them. What is the easiest way to send a list of objects between server and client? Could you guys give an example?
You could use JSON.
You can put this in your OpenFL project (client):
var myData = [1, 2, 3, 4, 5];
var http = new haxe.Http("server.php");
http.addParameter("myData", haxe.Json.stringify(myData));
http.onData = function(resultData) {
    trace('the data was sent to the server, this is the response: ' + resultData);
}
http.request(true);
If you have a server.php file, you can access the data like this:
$myData = json_decode($_POST["myData"]);
If the server returns Json data which needs to be read in the client, then in Haxe you need to do haxe.Json.parse(resultData);
EDIT: I'm still not sure if the user's problem is really about sending "a list of objects"; see the comment on the question...
The easiest way is to use Haxe Serialization, either with Haxe Remoting or with your own protocol on top of TCP/UDP. The choice of protocol depends on whether you already have something built and whether you will be calling functions or simply getting/posting data.
In either case, haxe.Serializer/Unserializer will give you a format to transmit most (if not all) Haxe objects from client to server with minimal code.
See the following minimal example (from the manual) on how to use the serialization APIs. The format is string based and specified.
import haxe.Serializer;
import haxe.Unserializer;

class Main {
    static function main() {
        var serializer = new Serializer();
        serializer.serialize("foo");
        serializer.serialize(12);
        var s = serializer.toString();
        trace(s); // y3:fooi12

        var unserializer = new Unserializer(s);
        trace(unserializer.unserialize()); // foo
        trace(unserializer.unserialize()); // 12
    }
}
Finally, you could also use other serialization formats like JSON (with haxe.Json.stringify/parse) or XML, but they wouldn't be so convenient if you're dealing with enums, class instances or other data not fully supported by these formats.

Mongoengine Django Rest Framework - Serializer Error - ReferenceField is not JSON serializable

Everything works great until the ObjectId value of the ReferenceField no longer points to a valid document. Then the ObjectId is left as the value, and json doesn't know how to serialize it.
How do I deal with invalid ReferenceFields?
E.g.
class Food(Document):
    name = StringField()
    owner = ReferenceField("Person")

class Person(Document):
    first_name = StringField()
    last_name = StringField()

...

p = Person(...)
apple = Food(name="apple", owner=p)
p.delete() # might be the wrong method, but you get the idea
At this point, attempting to fetch a list of foods via the REST API will fail with the "is not JSON serializable" error, since apple.owner no longer points to an owner that exists.
Since you are using DRF with mongoengine, you must be using django-rest-framework-mongoengine.
Apparently, it's a bug in django-rest-framework-mongoengine. Check this open issue on GitHub, which was reported recently about the same problem.
https://github.com/umutbozkurt/django-rest-framework-mongoengine/issues/91
One way is to write your own JSONEncoder for this. This link might help.
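For illustration, a small sketch of such an encoder (my own, not from the linked page), falling back to str() for BSON types like ObjectId:
import json

from bson import ObjectId
from bson.dbref import DBRef

class MongoJSONEncoder(json.JSONEncoder):
    """Fallback encoder for BSON types that the stock JSON encoder rejects."""

    def default(self, obj):
        if isinstance(obj, (ObjectId, DBRef)):
            # Represent dangling references by their id as a plain string.
            return str(obj)
        return super().default(obj)

# Usage: json.dumps(payload, cls=MongoJSONEncoder)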
Another option is to use the json_util library of Pymongo. They provide explicit BSON conversion to and from json.
As per the json_util docs:
This module provides two helper methods dumps and loads that wrap the native json methods and provide explicit BSON conversion to and from json. This allows for specialized encoding and decoding of BSON documents into Mongo Extended JSON's Strict mode. This lets you encode/decode BSON documents to JSON even when they use special BSON types.
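A minimal sketch of that option (my own, assuming PyMongo's bson package is installed):
from bson import ObjectId, json_util

doc = {"name": "apple", "owner": ObjectId()}  # e.g. a document holding a BSON ObjectId

# Extended-JSON string that keeps the BSON type information ...
json_string = json_util.dumps(doc)

# ... and can be turned back into BSON-aware Python objects.
restored = json_util.loads(json_string)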

How to import tweets with only the hashtag?

I'm looking to write a script to import tweets by only grabbing tweets with the designated hashtag. And I'm a little confused on how I would go about doing that. Ideally I would like just the raw data. Nothing fancy.
I tried working with tweetstream in Ruby, but I see that most of the Twitter API returns JSON, which I'm not too familiar with. Anyway, does anyone have an idea of how I can simply import tweets based on a hashtag? Maybe stream them in my terminal?
JSON is the foundation of many web APIs. I think the general consensus would be...
Learn JSON
It really is not that hard of a notation. Here is a tutorial specifically on JSON in Ruby.
Example:
require 'json'

json_result = '{"tweet": "#awesomesauce is for the #winner"}' # From Twitter API
results = JSON.parse(json_result)
# => {"tweet" => "#awesomesauce is for the #winner"}