Using MongoLab with Mongoose and models using ObjectIds

I'm using the MongoLab add-on on Heroku.
My app uses Mongoose and, according to the docs, the document id type is ObjectId by default.
This is why my JSON looks something like this:
{
  "__v" : 0,
  "_id" : ObjectId("53c824d6f26327e00f9ae117"),
  "company" : "53c824d6f26327e00f9ae118",
  ...
}
The problem: the MongoLab add-on does not know how to parse the keyword "ObjectId" and displays an error message.
Am I missing something here? What can be done?

If you're referring to the JSON editor in the MongoLab web UI, it only accepts strict JSON formatting. For special types like ObjectIds and dates, you need to use their associated extended JSON format. For an ObjectId, that would look like:
{
  "__v": 0,
  "_id": {"$oid": "53c824d6f26327e00f9ae117"},
  "company": "53c824d6f26327e00f9ae118",
  ...
}
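A date field uses its own extended JSON wrapper in the same way; a small sketch (the createdAt field name is made up for illustration):
{
  "createdAt": {"$date": "2014-07-17T00:00:00.000Z"}
}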
Hopefully that helps! You can always feel free to write us at support@mongolab.com for any questions or issues.
Kind regards,
Sean @ MongoLab

Related

How to make a DFDL model for JSON data?

I'm learning App Connect Enterprise v11 and trying to make a DFDL schema for JSON data, but I do not know how. I successfully made a schema for record-oriented text like the example below, but do not know how to do the same for JSON.
Record-Oriented-Text example:
Delivery+++XYZ123ABC+++My order was delivered in time, but the package was torn|C01-COM684a2da-384+++Your complaint has been received
JSON example:
{
  "YourComplaint": {
    "Type": "Delivery",
    "Reference": "XYZ123ABC",
    "Text": "My order was delivered in time, but the package was torn"
  },
  "Reply": {
    "OurReference": "C01-COM684a2da-384",
    "Text": "Your complaint has been received"
  }
}
You should not create a DFDL schema for JSON data. ACE can parse JSON without any help from a schema. You should use the JSON parser/domain for JSON (just as you use the XMLNSC domain for XML).
If you need to output your record-oriented data as JSON then you need to map from InputRoot.DFDL to OutputRoot.JSON. You may also need to set some field types to ensure that the JSON data looks exactly how you need it.
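As a rough illustration, that mapping can live in a Compute node. A minimal ESQL sketch, where the output field names come from the JSON example above and the input DFDL structure is assumed:
-- Copy the parsed record-oriented tree (DFDL domain) into the JSON domain.
-- The InputRoot.DFDL field names here are assumptions about the DFDL model.
CREATE COMPUTE MODULE RecordToJson
  CREATE FUNCTION Main() RETURNS BOOLEAN
  BEGIN
    SET OutputRoot.JSON.Data.YourComplaint.Type = InputRoot.DFDL.Complaint.Type;
    SET OutputRoot.JSON.Data.YourComplaint.Reference = InputRoot.DFDL.Complaint.Reference;
    SET OutputRoot.JSON.Data.YourComplaint.Text = InputRoot.DFDL.Complaint.Text;
    SET OutputRoot.JSON.Data.Reply.OurReference = InputRoot.DFDL.Reply.OurReference;
    SET OutputRoot.JSON.Data.Reply.Text = InputRoot.DFDL.Reply.Text;
    RETURN TRUE;
  END;
END MODULE;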

Graphql type with id property that can have different values for same id

I was wondering if an object type that has an id property has to have the same content given the same id. At the moment the same id can have different content.
The following query:
const query = gql`
  query products($priceSelector: PriceSelectorInput!) {
    productProjectionSearch(priceSelector: $priceSelector) {
      total
      results {
        masterVariant {
          # If you do the following it will work
          # anythingButId: id
          id
          scopedPrice {
            country
          }
        }
      }
    }
  }
`;
If the PriceSelectorInput is {currency: "USD", country: "US"} then the result is:
{
  "productProjectionSearch": {
    "total": 2702,
    "results": [
      {
        "name": "Sweater Pinko white",
        "masterVariant": {
          "id": 1,
          "scopedPrice": {
            "country": "US",
            "__typename": "ScopedPrice"
          },
          "__typename": "ProductSearchVariant"
        },
        "__typename": "ProductProjection"
      }
    ],
    "__typename": "ProductProjectionSearchResult"
  }
}
If the PriceSelectorInput is {currency: "EUR", country: "DE"} then the result is:
{
  "productProjectionSearch": {
    "total": 2702,
    "results": [
      {
        "name": "Sweater Pinko white",
        "masterVariant": {
          "id": 1,
          "scopedPrice": {
            "country": "DE",
            "__typename": "ScopedPrice"
          },
          "__typename": "ProductSearchVariant"
        },
        "__typename": "ProductProjection"
      }
    ],
    "__typename": "ProductProjectionSearchResult"
  }
}
The masterVariant of type ProductSearchVariant has an id of 1 in both cases, but different values for scopedPrice. This breaks Apollo's defaultDataIdFromObject cache function, as demonstrated in this repo. My question is: is this a bug in Apollo, or is this a violation of the GraphQL spec in the type definition of ProductSearchVariant?
TL;DR
No, it does not break the spec. The spec mandates absolutely nothing with regard to caching.
Literature for those who may be interested
From the end of the overview section
Because of these principles [... one] can quickly become productive without reading extensive documentation and with little or no formal training. To enable that experience, there must be those that build those servers and tools.
The following formal specification serves as a reference for those builders. It describes the language and its grammar, the type system and the introspection system used to query it, and the execution and validation engines with the algorithms to power them. The goal of this specification is to provide a foundation and framework for an ecosystem of GraphQL tools, client libraries, and server implementations -- spanning both organizations and platforms -- that has yet to be built. We look forward to working with the community in order to do that.
As we just saw, the spec says nothing about caching or implementation details; that's left to the community. The rest of the document proceeds to give details on how the type system, the language, and requests and responses should be handled.
Also note that the document does not mention which underlying protocol is used (although commonly it's HTTP). You could effectively run GraphQL communication over a USB device or over infrared light.
We hosted a talk at our tech conference that you might find interesting. Here's a link:
GraphQL Anywhere - Our Journey With GraphQL Mesh & Schema Stitching • Uri Goldshtein • GOTO 2021
If we "Ctrl+F" ourselves to look for things as "Cache" or "ID" we can find the following section which I think would help get to a conclusion here:
ID
The ID scalar type represents a unique identifier, often used to refetch an object or as the key for a cache. The ID type is serialized in the same way as a String; however, it is not intended to be human‐readable. While it is often numeric, it should always serialize as a String.
Result Coercion
GraphQL is agnostic to ID format, and serializes to string to ensure consistency across many formats ID could represent, from small auto‐increment numbers, to large 128‐bit random numbers, to base64 encoded values, or string values of a format like GUID.
GraphQL servers should coerce as appropriate given the ID formats they expect. When coercion is not possible they must raise a field error.
Input Coercion
When expected as an input type, any string (such as "4") or integer (such as 4) input value should be coerced to ID as appropriate for the ID formats a given GraphQL server expects. Any other input value, including float input values (such as 4.0), must raise a query error indicating an incorrect type.
It mentions that such a field is often used as a cache key (and that's the default cache key for the Apollo family of GraphQL implementations), but it doesn't tell us anything about the consistency of the returned data.
Here's the link for the full specification document for GraphQL
Warning! Opinionated - My take on IDs
Of course, all I am about to say has nothing to do with the GraphQL specification.
Sometimes an ID alone is not enough information to decide whether to cache something. Let's think about user searches:
Say I have a FavouriteSearch entity that has an ID in my database and a field called textSearch. I'd commonly like to expose a property results: [Result!]! in my GraphQL schema referencing all the results that this specific text search yielded.
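In SDL, that hypothetical entity would look something like this (a sketch; the Result type is assumed to exist elsewhere in the schema):
type FavouriteSearch {
  id: ID!
  textSearch: String!
  results: [Result!]!
}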
These results are very likely to differ between the moment I save the search and five minutes later when I revisit my favourite search (think of a text search on a platform such as TikTok, where users upload content at a massive rate).
So, based on this definition of the FavouriteSearch entity, it makes sense that caching by ID leads to rather unexpected behavior.
If we think about the problem from a different angle, we might instead want a SearchResults entity with an ID and a timestamp, plus a join table referencing all the posts that matched the initial text search. In that case it would make sense to return consistent content for the results property in our GraphQL schema.
The point is that it depends on how we define our entities, and it's ultimately not governed by the GraphQL spec.
A solution for your problem
You can specify how Apollo generates the key it later uses for the cache, as @Matt already pointed out in the comments. You may want to tap into that and override the behavior for entities whose __typename matches your masterVariant property type, returning no key for them, in order to stop your ApolloClient from caching those specific fields.
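A minimal sketch of that idea, assuming Apollo Client 3 (with the older apollo-cache-inmemory, returning null from dataIdFromObject achieves the same thing):
import { InMemoryCache } from '@apollo/client';

// Disable normalization for ProductSearchVariant: objects of this type are
// embedded in their parent result instead of being merged into a single
// "ProductSearchVariant:1" entry that both price selectors would overwrite.
const cache = new InMemoryCache({
  typePolicies: {
    ProductSearchVariant: {
      keyFields: false,
    },
  },
});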
I hope this was helpful!

How to generate types.json in substrate

polkadot-js lets developers define the custom types used in their pallets so that the polkadot-js UI can understand those types (that is, so the underlying polkadot-js API can work with them). These types are defined in JSON format. Here is an example:
{
  "TransactionInput": {
    "parent_output": "Hash",
    "signature": "Signature"
  },
  "TransactionOutput": {
    "value": "u128",
    "pubkey": "Hash",
    "sale": "u32"
  },
  "Transaction": {
    "inputs": "Vec<TransactionInput>",
    "outputs": "Vec<TransactionOutput>"
  }
}
I see that substrate-node-template/scripts has an aggregrate_types.js file that generates types.json. I don't know whether to generate it automatically or write it by hand.
For example, in my pallet I have defined an enum RoleID and a struct Role, but the UI doesn't understand what RoleID is. Can you explain more clearly? I believe this is related to defining types.json.
https://github.com/polkadot-js/apps/blob/master/packages/page-settings/src/md/basics.md#developer
Thanks!!!
Presently, generating this by hand is the best way, following the docs here. There are no clean ways to automatically generate this to my knowledge, but soon you won't need to worry about it at all once this PR lands in Substrate!
Thanks to https://github.com/paritytech/substrate/pull/8615, you don't have to manually write types.json anymore.
Make sure the metadata version of your node is v14 or higher. Otherwise you need to upgrade your substrate version to make it automagically work for you.
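A quick way to verify this is to connect with the polkadot-js API and print the metadata version; a minimal sketch assuming @polkadot/api and a local node on the default WebSocket port:
// Connect to the node and report which metadata version it exposes.
// v14 or higher means the chain is self-describing and needs no types.json.
const { ApiPromise, WsProvider } = require('@polkadot/api');

async function main() {
  const api = await ApiPromise.create({ provider: new WsProvider('ws://127.0.0.1:9944') });
  console.log('Metadata version:', api.runtimeMetadata.version);
  await api.disconnect();
}

main().catch(console.error);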

Where can I get a list of categories?

This is not so much a specific programming-related question; it's more about the planning part.
So I have posts in my DB that look like this:
{
  "_id": "57f88bb94b5342b2025d5646",
  "postID": "12345",
  "profileID": "12345678",
  "title": "testT",
  "description": "testD",
  "views": 0,
  "dateCreated": "2016-10-08 06:01:29",
  "categories": [],
  "__v": 0
}
Now I am creating the front-end part for it, and I need to present the categories in a dropdown list or something similar.
I know I can just populate it or get it from the DB, but where does the list actually come from? I mean, say I need to create a post about Lamborghini; it comes under the brand and automobile categories.
What I am essentially after is a place where I can grab these categories and their sub-categories.
Does anyone know of such a service/API? Is there another way?
By the way, I am on the MEAN Stack.
Thanks,
Shayan
In the end, I just gathered categories from all sorts of different sites and combined them into one list.
Sadly, it looks like there is no such API service for this.

Carrot2+ElasticSearch Basic Flow of Information

I am using Carrot2 and Elasticsearch. I had an Elasticsearch server running with a lot of data when I installed the Carrot2 plugin.
Wanted to get answers to a few basic questions:
Will clustering work only on newly indexed documents or even old documents?
How can I specify which fields to look at for clustering?
The curl command is working and giving some results. How can I turn that curl command, which takes JSON as input, into a call to the REST API URL of the form localhost:9200/article-index/article/_search_with_clusters?.....
Appreciate any help.
Yes, if you want to use the plugin straight off the ES installation, you need to make REST calls of your own. I believe you are using Python. Take a look at requests; it is a delightful HTTP library for Python.
To make POST requests you can do the following:
import json
import requests

url = 'http://localhost:9200/article-index/article/_search_with_clusters'
payload = {'some': 'data'}
r = requests.post(url, data=json.dumps(payload))
print(r.text)
Find more information in the requests documentation.
Will clustering work only on newly indexed documents or even old documents?
It will work even on old documents.
How can I specify which fields to look at for clustering?
Here's an example using the Shakespeare dataset. The query is: which of Shakespeare's plays are about war?
$ curl -XPOST http://localhost:9200/shakespeare/_search_with_clusters?pretty -d '
{
  "search_request": {
    "query": {"match": {"_all": "war"}},
    "size": 100
  },
  "max_hits": 0,
  "query_hint": "war",
  "field_mapping": {
    "title": ["_source.play_name"],
    "content": ["_source.text_entry"]
  },
  "algorithm": "lingo"
}'
Running this, you'll get back plays like Richard, Henry... The title is what Carrot2 uses to develop the cluster names, and the text entry is what it uses to build the clusters.
The curl command is working and giving some results. How can I turn that curl command, which takes JSON as input, into a call to the REST API URL of the form localhost:9200/article-index/article/_search_with_clusters?.....
Typically, you'd use the Elasticsearch client libraries for your language of choice.
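If you'd rather stay with plain requests, the curl example above translates directly. A sketch for the article-index in the question (the title and content field names are assumptions about the article mapping):
import json
import requests

# Mirror the curl call above against the article index; max_hits of 0 returns
# only the clusters rather than the individual document hits.
url = 'http://localhost:9200/article-index/article/_search_with_clusters'
payload = {
    'search_request': {
        'query': {'match': {'_all': 'war'}},
        'size': 100,
    },
    'max_hits': 0,
    'query_hint': 'war',
    'field_mapping': {
        'title': ['_source.title'],
        'content': ['_source.content'],
    },
    'algorithm': 'lingo',
}
r = requests.post(url, data=json.dumps(payload))
print(r.text)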
