What are the valid data types for API Blueprint?

I am just wondering exactly what the valid data types are when working with API Blueprint.
The documentation seems unclear to me. More specifically, it says:
type is the optional parameter type as expected by the API (e.g. "number", "string", "boolean"). "string" is the default.
Does this mean:
Only "number", "string" and "boolean" are valid?
It is expecting JSON primitive types?
Other?
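For reference, here is how the three primitive types named in that quote would appear in a Parameters section (the parameter names and descriptions are made up for illustration):
+ Parameters
    + id (number, required) - Identifier of the item
    + name (string, optional) - Display name
    + archived (boolean, optional) - Whether the item is archived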

Related

How to convert json to collection in power apps

I have a Power App that uses a flow from Power Automate.
My flow does an HTTP GET and returns JSON to Power Apps, like below.
Here is the JSON as text:
{"value": "[{\"dataAreaId\":\"mv\",\"AccountNum\":\"100000\",\"Name\":\"*****L FOOD AB\"},{\"dataAreaId\":\"mv\",\"AccountNum\":\"100001\",\"Name\":\"**** AB\"},{\"dataAreaId\":\"mv\",\"AccountNum\":\"100014\",\"Name\":\"****(SEB)\"},{\"dataAreaId\":\"mv\",\"AccountNum\":\"100021\",\"Name\":\"**** AB\"},{\"dataAreaId\":\"mv\",\"AccountNum\":\"100029\",\"Name\":\"**** AB\"},{\"dataAreaId\":\"mv\",\"AccountNum\":\"500100\",\"Name\":\"**** AB\"},{\"dataAreaId\":\"mv\",\"AccountNum\":\"500210\",\"Name\":\"****\"}]"}
But when I try to convert this JSON to a collection, it doesn't behave like a list.
It just seems like text. Here is how I try to bind the list.
How can I create a collection from JSON to bind to the gallery view?
I found the solution. I finally created a collection from the response of the flow.
The flow's name is GetVendor.
The response of the flow looks like this:
{"value": "[{\"dataAreaId\":\"mv\",\"AccountNum\":\"100000\",\"Name\":\"*****L FOOD AB\"},{\"dataAreaId\":\"mv\",\"AccountNum\":\"100001\",\"Name\":\"**** AB\"},{\"dataAreaId\":\"mv\",\"AccountNum\":\"100014\",\"Name\":\"****(SEB)\"},{\"dataAreaId\":\"mv\",\"AccountNum\":\"100021\",\"Name\":\"**** AB\"},{\"dataAreaId\":\"mv\",\"AccountNum\":\"100029\",\"Name\":\"**** AB\"},{\"dataAreaId\":\"mv\",\"AccountNum\":\"500100\",\"Name\":\"**** AB\"},{\"dataAreaId\":\"mv\",\"AccountNum\":\"500210\",\"Name\":\"****\"}]"}
The code below creates a collection from this response:
ClearCollect(_vendorData, MatchAll(GetVendors.Run(_token.value).value, "\{""dataAreaId"":""(?<dataAreaId>[^""]*)"",""AccountNum"":""(?<AccountNum>[^""]*)"",""Name"":""(?<Name>[^""]*)""\}"));
And I could bind the AccountNum and Name from the _vendorData collection to the gallery view.
In my case I had the same issue, but I couldn't get the data into the _vendorData collection: the MatchAll regex was not working correctly, even though I had exactly the same scenario.
My solution was to modify the flow itself, where I used a Response action instead of "Respond to a PowerApp or flow", so I could return the full HTTP response.
This also caused me some issues, because when I generated the schema from a sample I could not register the flow in the Power App; I got the error "Failed during http send request".
The solution was to manually review the response schema and change all column types to one of the following three, because others are not supported: string, integer, or boolean. Object and array can be set only on top-level items, never on children, so if you have anything other than those three, replace it with string. No property can be left with an undefined type.
I actually like this solution even more, because in Power Apps itself you do not need to do any conversion: simply use the data as is, since an array is already recognized as a collection and all the properties are already named for you (see the sketch after the schema below).
Response step schema example is below.
{
  "type": "object",
  "properties": {
    "PropertyOne": {
      "type": "string"
    },
    "PropertyTwo": {
      "type": "integer"
    },
    "PropertyThree": {
      "type": "boolean"
    },
    "PropertyFour": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "PropertyArray1": {
            "type": "string"
          },
          "PropertyArray2": {
            "type": "integer"
          },
          "PropertyArray3": {
            "type": "boolean"
          }
        }
      }
    }
  }
}
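To illustrate "use the data as is": with a schema like the one above, the flow result can be collected directly. A rough sketch, where GetVendors, _token and the property names are just the placeholders used in this thread:
// Sketch only: GetVendors and _token are the names used earlier in this thread.
ClearCollect(_vendorData, GetVendors.Run(_token.value).PropertyFour);
// The gallery can then use _vendorData and reference e.g. ThisItem.PropertyArray1 directly.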
It is easy now.
Power Apps introduced the ParseJSON function, which makes it easy to convert a string to a collection.
Table(ParseJSON(JSONString));
In the gallery, map columns like ThisItem.Value.ColumnName.
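Applied to the response above, a rough sketch could look like this (it assumes the flow from the earlier answer, GetVendors, returning the {"value": "[...]"} payload; adjust the names to your app):
// Sketch only: GetVendors, _token and the column names come from the posts above.
ClearCollect(
    _vendorData,
    ForAll(
        Table(ParseJSON(GetVendors.Run(_token.value).value)),
        {
            AccountNum: Text(ThisRecord.Value.AccountNum),
            Name: Text(ThisRecord.Value.Name)
        }
    )
);
// Gallery: Items = _vendorData, labels = ThisItem.AccountNum and ThisItem.Name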

Tool to automatically generate documentation on elastic index

I have a project on Kibana/Elastic. I can see / manipulate the indices and see the fields and value types with GET <index>/_mapping.
Other members (in particular, managers) in my team do not have access to Kibana and I need to write some documentation for them. Basically, I need to give them a view on what's in the indices.
I find myself copy-pasting and simplifying (removing some not-so-informative layers) the JSON-like output of GET <index>/_mapping. That's not a good process.
Is there a tool that automates this, and ensures synchronisation between the db and the documentation?
I don't know of any tool that automates this. The simplest way, IMO, would be to create a single-page webapp which connects to ES and calls
GET _all/_mapping?format=yaml
which will return something like
myindex:
  mappings:
    properties:
      date1:
        type: "date"
      date2:
        type: "date"
      date3:
        type: "date"
      status:
        type: "text"
        fields:
          keyword:
            type: "keyword"
            ignore_above: 256
which is already more readable than JSON.
Going one step further, you could add a multi-select dropdown to filter for specific fields, e.g.:
GET _all/_mapping/field/name,color?format=yaml
which would return something along the lines of
online_shop:
  mappings:
    color:
      full_name: "color"
      mapping:
        color:
          type: "keyword"
    name:
      full_name: "name"
      mapping:
        name:
          type: "text"
          fields:
            keyword:
              type: "keyword"

Dynamic Template not working for short, byte & float

I am trying to create a template, and in that template I am trying to achieve dynamic mapping.
Here is what I wrote. In 6.2.1 only boolean, date, double, long, object, and string are detected automatically, so I am facing issues mapping float, short, and byte.
If I index 127, it is mapped to short by short_fields, which is fine, but when I index something like 325566, I get the exception Numeric value (325566) out of range of Java short. I want to suppress this and let long_fields take care of it, so that the field is mapped to long. I have tried coerce: false and ignore_malformed: true; neither worked as expected.
"dynamic_templates": [
{
"short_fields": {
"match": "*",
"match_mapping_type": "long",
"mapping": {
"type": "short",
"doc_values": true
}
}
},
{
"long_fields": {
"match": "*",
"match_mapping_type": "long",
"mapping": {
"type": "long",
"doc_values": true
}
}
},
{
"byte_fields": {
"match": "*",
"match_mapping_type": "byte",
"mapping": {
"type": "byte",
"doc_values": true
}
}
}
]
Unfortunately, it is not possible to make Elasticsearch choose the smallest data type possible for you. There are plenty of workarounds, but let me first explain why it does not work.
Why doesn't it work?
Dynamic mapping templates allow you to override the default dynamic type matching in three ways:
by matching the name of the field,
by matching the type Elasticsearch has guessed for you,
and by a path in the document.
Elasticsearch picks the first matching rule that works. In your case, the first rule, short_fields, always works for any integer, because it accepts any field name and a guessed type long.
That's why it works for 127 but doesn't work for 325566.
To illustrate this point better, let's change "match_mapping_type" in the first rule like this:
"match_mapping_type": "short",
Elasticsearch does not accept it and returns an error:
{
  "type": "mapper_parsing_exception",
  "reason": "Failed to parse mapping [doc]: No field type matched on [short], possible values are [object, string, long, double, boolean, date, binary]"
}
But how can we make Elasticsearch pick the right types?
Here are some of the options.
Define strict mapping manually
This gives you full control over the selection of types.
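For instance, a minimal sketch of an explicit 6.x mapping (the index and field names are made up) could look like:
PUT my_index
{
  "mappings": {
    "doc": {
      "properties": {
        "customer_age": { "type": "byte" },
        "item_count": { "type": "short" },
        "revenue": { "type": "long" }
      }
    }
  }
}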
Use the default long
Postpone "shrinking" data until it starts being a performance problem.
In fact, using smaller data types will only affect searching/indexing performance, not the storage required. As long as you are fine with dynamic mappings, Elasticsearch manages them for you pretty well.
Mark field names with type information
Since Elasticsearch is not able to tell a byte from a long, you can determine the type beforehand and add type information to the field name, like customerAge_byte or revenue_long.
Then you will be able to use a prefix/suffix match like this:
{
  "bytes_as_longs": {
    "match_mapping_type": "long",
    "match": "*_byte",
    "mapping": {
      "type": "byte"
    }
  }
}
Please choose the approach that fits your needs best.
Why Elasticsearch takes longs
The reason why Elasticsearch takes longs for any integer input probably comes from the JSON definition of the number type (see json.org).
It is not possible to tell whether a number like 0 or 1 is an integer or a long across the entire dataset. Elasticsearch has to guess the correct type from the first value it sees, and it takes the safest shot possible.
Hope that helps!

What is `_code` in `profile-types.json`

For example, snapshot/element[2] contains:
"type": [
{
"fhir_comments": [
"Note: primitive values do not have an assigned type\r\n e.g. this is compiler magic\r\n XML and JSON types provided by extension"
],
"_code": {
"extension": [
{
"url": "http://hl7.org/fhir/StructureDefinition/structuredefinition-json-type",
"valueString": "string"
},
{
"url": "http://hl7.org/fhir/StructureDefinition/structuredefinition-xml-type",
"valueString": "xs:string"
}
]
}
}
]
As far as I know, there is no property _code defined for StructureDefinition.
What's the correct way to treat and interpret this property?
This is the "code" element. _code is used to convey complex children on primitive data types (e.g. the id element, extensions, or modifier extensions). This is defined in the specification here. The representation is done this way so you can reference primitive elements by just saying something like Patient.birthDate or Patient.gender instead of Patient.birthDate.value or Patient.gender.value. To allow that, we needed to provide a convention for accessing extensions and other elements which are rare, but can still be present on 'primitive' data types.
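For example, the same underscore convention in a resource instance looks roughly like this (the values and the birthTime extension are only illustrative):
{
  "resourceType": "Patient",
  "birthDate": "1970-03-30",
  "_birthDate": {
    "extension": [
      {
        "url": "http://hl7.org/fhir/StructureDefinition/patient-birthTime",
        "valueDateTime": "1970-03-30T10:00:00+00:00"
      }
    ]
  }
}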

Do document field expression functions work in expression scripts?

In Elasticsearch (version 1.4.3), when using script_score with the Lucene expression lang, I always get a QueryParsingException when trying any of the expression field functions, such as:
doc['field_name'].distance(lat, lon),
doc['field_name'].distanceWithDefault(lat, lon, default), and
doc['field_name'].geohashDistanceInKm(geohash).
See the list of expressions that the docs say are supported.
Notably, the value expressions are accepted (though I have not tested whether their values are correct). So scripts that mention doc['field_name'].value are accepted, at least by the parser.
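For example, a value-only expression along these lines is the kind of script the parser does accept (the popularity field is just illustrative):
"script_score": {
  "lang": "expression",
  "script": "ln(doc['popularity'].value + 1)"
}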
Given the security vulnerability of Groovy, a lot of hosted Elasticsearch providers (such as Bonsai) have turned off support for Groovy scripting. My thinking is that none of these functions have been bound for the expression lang.
Example:
"script_score" : {
"lang" : "expression",
"script": "exp(pow(doc['location'].geohashDistanceInKm(geohash), 2) / (pow(scale, 2) / ln(decay)))",
"params": {
"geohash" : "9q8yy",
"scale" : "0.3",
"decay" : "0.1"
}
}
Gives me an unhappy:
IllegalArgumentException[Unrecognized method call (doc['location'].geohashDistanceInKm).]
Yes, location is a geo_point type:
"location": {
"type": "geo_point"
},
I would appreciate confirmation of this. For now I'll code a client-side workaround.
