springdoc-openapi: publish enum as reference when enum comes from generated code

I'm using
<dependency>
<groupId>org.openapitools</groupId>
<artifactId>openapi-generator-maven-plugin</artifactId>
<version>4.3.1</version>
</dependency>
<dependency>
<groupId>org.springdoc</groupId>
<artifactId>springdoc-openapi-ui</artifactId>
<version>1.4.8</version>
</dependency>
for generating client stubs from OpenAPI specs and generating my own OpenAPI documentation.
I have an API, let's call it API-1, which I use within my project. This API provides an enum, simplified here:
@Schema(enumAsRef = true)
public enum SomethingEnum {
A,
B,
C
}
API-1 provides an OpenAPI specification in which the enum is included as a schema and referenced. That is all fine.
I use this enum in API-2; I generate all models from API-1 with the openapi-generator-maven-plugin.
API-2 provides an OpenAPI specification which, simplified, looks like this:
{
"paths": {
"/request": {
"get": {
"tags": [
"requests"
],
"operationId": "getSomething",
"responses": {
"200": {
"description": "OK",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/Something"
}
}
}
}
}
}
}
},
"schemas": {
"Something": {
"type": "object",
"properties": {
"somethingEnum": {
"type": "array",
"items": {
"type": "string",
"enum": [
"A",
"B",
"C"
]
}
},
"id": {
"type": "string"
}
}
}
}
}
And here is the problem: The SomethingEnum is not referenced via an Schema.
It should rather look like this:
{
"paths": {
"/request": {
"get": {
"tags": [
"requests"
],
"operationId": "getSomething",
"responses": {
"200": {
"description": "OK",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/Something"
}
}
}
}
}
}
}
},
"schemas": {
"Something": {
"type": "object",
"properties": {
"somethingEnum": {
"$ref": "#/components/schemas/SomethingEnum"
},
"id": {
"type": "string"
}
}
},
"SomethingEnum": {
"type": "string",
"enum": [
"A",
"B",
"C"
]
}
}
}
How can I achieve this? Is there a way I can either
configure the openapi-generator-maven-plugin to annotate generated enums automatically with @Schema(enumAsRef = true), or
configure springdoc somehow?
I hope my problem is clear. Thanks for every suggestion.

Related

Logicapp Expression to read Dynamic Json path - read child element where parent path may change but hierarchy remaining same

Hope all is well.
I need to create a Logic App expression that reads a child element in JSON where the name and hierarchy of the element stay the same, but the parent names can change.
For example, JSON-1:
{
"root": {
"abc1": {
"abc2": [
{
"element": "value1",
"element2": "value"
},
{
"element": "value2",
"element2": "valu2"
}
]
}
}
}
JSON-2 :
{
"root": {
"xyz1": {
"xyz2": [
{
"element": "value1",
"element2": "value"
},
{
"element": "value2",
"element2": "valu2"
}
]
}
}
}
I have tried these, but with no luck:
approach-1: #{body('previous-action')?['']?['']?['element']
approach-2: #{body('previous-action')???['element']
Please let me know if anyone has encountered this situation. Many thanks in advance.
I tend to find that converting the JSON to XML (at least in your case) is the simplest solution. Once you've done that, you can use XPath to simply make your selection.
Flow
In basic terms ...
I've defined a variable of type object that contains your JSON.
I then convert that JSON object to XML using this expression xml(variables('JSON Object'))
Next, I initialize a variable called Elements, of type array (given you have multiple of them). The expression for setting that variable is where the smarts come in. That expression is xpath(xml(variables('XML')), '//element/text()') and it gets the inner text of all element nodes in the XML.
Finally, loop through the results.
If you needed to take it up a level and get the second element, you'd need to make your XPath query a lot more generic so you can get the element2 nodes (and 3, 4, 5, etc. if they existed) in each array as well.
Note: I've stuck to your specific question of looking for element.
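Outside of Logic Apps, the same convert-then-query idea can be sketched in Python. The json_to_xml helper below is a hypothetical stand-in for the xml() function, and xml.etree's limited XPath support is enough for //element:

```python
import xml.etree.ElementTree as ET

def json_to_xml(tag, value):
    """Minimal JSON -> XML conversion, loosely mimicking Logic Apps' xml()."""
    node = ET.Element(tag)
    if isinstance(value, dict):
        for key, child in value.items():
            node.append(json_to_xml(key, child))
    elif isinstance(value, list):
        for item in value:  # repeat the tag for each array entry
            node.append(json_to_xml(tag, item))
    else:
        node.text = str(value)
    return node

# JSON-1 from the question: parent names (abc1/abc2) may change,
# but the leaf name "element" stays the same.
doc = {"root": {"abc1": {"abc2": [
    {"element": "value1", "element2": "value"},
    {"element": "value2", "element2": "valu2"},
]}}}

root = json_to_xml("root", doc["root"])
# Equivalent of xpath(xml(...), '//element/text()'):
elements = [n.text for n in root.findall(".//element")]
print(elements)  # ['value1', 'value2']
```

Because //element matches at any depth, renaming abc1/abc2 to xyz1/xyz2 changes nothing in the query, which is exactly why the XML detour works here.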
Result
This definition (which can be loaded directly into your tenant) demonstrates the thinking ...
{
"definition": {
"$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
"actions": {
"For_Each_Element": {
"actions": {
"Set_Element": {
"inputs": {
"name": "Element",
"value": "#{item()}"
},
"runAfter": {},
"type": "SetVariable"
}
},
"foreach": "#variables('Elements')",
"runAfter": {
"Initialize_Element": [
"Succeeded"
]
},
"type": "Foreach"
},
"Initialize_Element": {
"inputs": {
"variables": [
{
"name": "Element",
"type": "string"
}
]
},
"runAfter": {
"Initialize_Elements": [
"Succeeded"
]
},
"type": "InitializeVariable"
},
"Initialize_Elements": {
"inputs": {
"variables": [
{
"name": "Elements",
"type": "array",
"value": "#xpath(xml(variables('XML')), '//element/text()')"
}
]
},
"runAfter": {
"Initialize_XML": [
"Succeeded"
]
},
"type": "InitializeVariable"
},
"Initialize_JSON_Object": {
"inputs": {
"variables": [
{
"name": "JSON Object",
"type": "object",
"value": {
"root": {
"abc1": {
"abc2": [
{
"element": "value1",
"element2": "value"
},
{
"element": "value2",
"element2": "valu2"
}
]
}
}
}
}
]
},
"runAfter": {},
"type": "InitializeVariable"
},
"Initialize_XML": {
"inputs": {
"variables": [
{
"name": "XML",
"type": "string",
"value": "#{xml(variables('JSON Object'))}"
}
]
},
"runAfter": {
"Initialize_JSON_Object": [
"Succeeded"
]
},
"type": "InitializeVariable"
}
},
"contentVersion": "1.0.0.0",
"outputs": {},
"parameters": {
"ParameterTest1": {
"defaultValue": "\"\"",
"type": "String"
}
},
"triggers": {
"manual": {
"inputs": {
"method": "GET",
"schema": {}
},
"kind": "Http",
"type": "Request"
}
}
},
"parameters": {}
}

Any idea how to do custom supportedCookingModes in Alexa discovery?

I'm trying to return a Discovery Response, but supportedCookingModes only seems to accept standard values, and only in the format ["OFF","BAKE"], not custom values as indicated by the documentation. Any idea how to specify custom values?
{
"event": {
"header": {
"namespace": "Alexa.Discovery",
"name": "Discover.Response",
"payloadVersion": "3",
"messageId": "asdf"
},
"payload": {
"endpoints": [
{
"endpointId": "asdf",
"capabilities": [
{
"type": "AlexaInterface",
"interface": "Alexa.Cooking",
"version": "3",
"properties": {
"supported": [
{
"name": "cookingMode"
}
],
"proactivelyReported": true,
"retrievable": true,
"nonControllable": false
},
"configuration": {
"supportsRemoteStart": true,
"supportedCookingModes": [
{
"value": "OFF"
},
{
"value": "BAKE"
},
{
"value": "CUSTOM",
"customName": "FANCY_NANCY_MODE"
}
]
}
}
]
}
]
}
}
}
Custom cooking modes are brand-specific. This functionality is not yet publicly available. I recommend you choose one of the existing cooking modes:
https://developer.amazon.com/en-US/docs/alexa/device-apis/cooking-property-schemas.html#cooking-mode-values

NiFi Convert JSON to CSV via JsonPathReader or JsonTreeReader

I am trying to convert a JSON file into CSV, but I don't seem to have any luck doing so. My JSON looks something like this:
...
{
{"meta": {
"contentType": "Response"
},
"content": {
"data": {
"_type": "ObjectList",
"erpDataObjects": [
{
"meta": {
"lastModified": "2020-08-10T08:37:21.000+0000",
},
"head": {
"fields": {
"number": {
"value": "1",
},
"id": {
"value": "10000"
},
}
}
{
"meta": {
"lastModified": "2020-08-10T08:37:21.000+0000",
},
"head": {
"fields": {
"number": {
"value": "2",
},
"id": {
"value": "10001"
},
}
}
{
"meta": {
"lastModified": "2020-08-10T08:37:21.000+0000",
},
"head": {
.. much more data
I basically want my csv to look like this:
number,id
1,10000
2,10001
My flow looks like this:
GetFile -> Set the output-file name -> ConvertRecord -> UpdateAttribute -> PutFile
ConvertRecord uses the JsonTreeReader and a CSVRecordSetWriter.
They both call on an AvroSchemaRegistry.
The Avro schema itself looks like this:
{
"type": "record",
"name": "head",
"fields":
[
{"name": "number", "type": ["string"]},
{"name": "id", "type": ["string"]}
]
}
But I only get this output:
number,id
,
Which makes sense, because I'm not specifically indicating where those values are located. I previously used the JsonPathReader instead, but that obviously only gave me one record. I'm not really sure how to configure either of the two to output exactly what I want. Help would be much appreciated!
Using ConvertRecord for JSON -> CSV is mostly intended for "flat" JSON files, where each field in the object becomes a column in the outgoing CSV file. For nested/complex structures, consider JoltTransformRecord, which allows you to do more complex transformations. Your example doesn't appear to be valid JSON as-is, but assuming you have something like this as input:
{
"meta": {
"contentType": "Response"
},
"content": {
"data": {
"_type": "ObjectList",
"erpDataObjects": [
{
"meta": {
"lastModified": "2020-08-10T08:37:21.000+0000"
},
"head": {
"fields": {
"number": {
"value": "1"
},
"id": {
"value": "10000"
}
}
}
},
{
"meta": {
"lastModified": "2020-08-10T08:37:21.000+0000"
},
"head": {
"fields": {
"number": {
"value": "2"
},
"id": {
"value": "10001"
}
}
}
}
]
}
}
}
The following JOLT spec should give you what you want for output:
[
{
"operation": "shift",
"spec": {
"content": {
"data": {
"erpDataObjects": {
"*": {
"head": {
"fields": {
"number": {
"value": "[&4].number"
},
"id": {
"value": "[&4].id"
}
}
}
}
}
}
}
}
}
]
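As a sanity check outside NiFi, the flattening this JOLT shift performs can be reproduced in plain Python (this is not JOLT itself, just the equivalent record extraction feeding a CSV writer):

```python
import csv
import io

# Abbreviated form of the input above: only the fields the shift touches
doc = {"content": {"data": {"erpDataObjects": [
    {"head": {"fields": {"number": {"value": "1"}, "id": {"value": "10000"}}}},
    {"head": {"fields": {"number": {"value": "2"}, "id": {"value": "10001"}}}},
]}}}

# One flat record per erpDataObjects entry, as the shift spec produces
records = [
    {"number": obj["head"]["fields"]["number"]["value"],
     "id": obj["head"]["fields"]["id"]["value"]}
    for obj in doc["content"]["data"]["erpDataObjects"]
]

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["number", "id"], lineterminator="\n")
writer.writeheader()
writer.writerows(records)
print(out.getvalue())
# number,id
# 1,10000
# 2,10001
```

The `[&4]` in the spec plays the role of the list index here: it groups each number/id pair back into the record it came from before the CSV writer sees it.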

Array map equivalent for GraphQL query

I was wondering if it is possible to reduce the return payload of this query. Result:
{
"nodes": [
{
"topic": {
"name": "typescript"
}
},
{
"topic": {
"name": "discord"
}
},
{
"topic": {
"name": "discord-bot"
}
},
{
"topic": {
"name": "discordjs"
}
},
{
"topic": {
"name": "discordjs-commando"
}
},
{
"topic": {
"name": "mbti-personality"
}
},
{
"topic": {
"name": "mbti"
}
},
{
"topic": {
"name": "typeorm"
}
}
]
}
Into something like this:
{
"nodes": ["typescript", "discord", "discord-bot", "discordjs", "discordjs-commando", "mbti-personality", "mbti", "typeorm"]
}
I find it very verbose and unnecessary.
I am not the owner of the API, so this concerns only the query. (It's GitHub's GraphQL API.)
I'm new to GraphQL and don't understand the principles yet, so I don't know the terms to search for.
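A GraphQL server always returns data in the exact shape of the query, so without control over the server's schema the flattening has to happen client-side, after the response arrives. A minimal Python sketch (result stands for the parsed JSON response):

```python
# Parsed JSON response from the question
result = {"nodes": [
    {"topic": {"name": "typescript"}},
    {"topic": {"name": "discord"}},
    {"topic": {"name": "discord-bot"}},
    {"topic": {"name": "discordjs"}},
    {"topic": {"name": "discordjs-commando"}},
    {"topic": {"name": "mbti-personality"}},
    {"topic": {"name": "mbti"}},
    {"topic": {"name": "typeorm"}},
]}

# Map each wrapper object down to the string it carries
names = [node["topic"]["name"] for node in result["nodes"]]
print(names[:3])  # ['typescript', 'discord', 'discord-bot']
```

The same one-liner works in any client language; GraphQL itself has no "array map" directive in the query syntax.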

How to force object key name in array

I am using YAML to mark up some formulas and using JSON schema to provide a reference schema.
An example of the YAML might be:
formula: # equates to '5 + (3 - 2)'
add:
- 5
- subtract: [3, 2]
While I have figured out how to make the immediate child object of the formula ("add" in this example) have the right key name and type (using a "oneOf" array of "required"s), I am not sure how to ensure that objects inside an array ("subtract") likewise use specific key names.
So far, I can ensure the type using the following. But with this method, as long as the object matches the subtract type, it is allowed any key name; it is not restricted to subtract:
"definitions": {
"add": {
"type": "array",
"minItems": 2,
"items": {
"anyOf": [
{ "$ref": "#/definitions/value"}, # value type is an integer which allows for the shown scalar array elements
{ "$ref": "#/definitions/subtract" }
// other operation types
]
}
},
"subtract": {
"type": "array",
"minItems": 2,
"maxItems": 2,
"items": {
"anyOf": [
{ "$ref": "#/definitions/value"},
{ "$ref": "#/definitions/add" }
// other operation types
]
}
}
// other operation types
}
How can I introduce a restriction such that the keys of objects in the array match specific names, while still also allowing scalar elements?
It sounds like what you want is recursive references.
By creating a new definition which is oneOf the operations and value, which then allow items which then reference back to the new definition, you have recursive references.
"definitions": {
"add": {
"type": "array",
"minItems": 2,
"items": { "$ref": "#/definitions/operations_or_values" }
},
"subtract": {
"type": "array",
"minItems": 2,
"maxItems": 2,
"items": { "$ref": "#/definitions/operations_or_values" }
},
// other operation types
"operations_or_values": {
"anyOf": [
{ "$ref": "#/definitions/add" },
{ "$ref": "#/definitions/subtract" },
{ "$ref": "#/definitions/value" }, # value type is an integer which allows for the shown scalar array elements
{ "$ref": "#/definitions/[OTHERS]" }
]
}
}
I haven't had time to test this, but I believe it will be, or be close to, what you need. Let me know if it doesn't work; I may not have fully understood the question.
What a fascinating problem! This remarkably concise schema can express any expression.
{
"type": ["object", "number"],
"propertyNames": { "enum": ["add", "subtract", "multiply", "divide"] },
"patternProperties": {
".*": {
"type": "array",
"minItems": 2,
"items": { "$ref": "#" }
}
}
}
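The behaviour of that recursive schema can be hand-checked with a small validator mirroring its logic in plain Python (not a JSON Schema library, just the same rules spelled out):

```python
OPERATORS = {"add", "subtract", "multiply", "divide"}

def valid(node):
    """A node is a number, or an object whose every key is an allowed
    operator name mapping to an array of >= 2 valid nodes ($ref: "#")."""
    if isinstance(node, bool):
        return False  # bools are ints in Python; exclude them explicitly
    if isinstance(node, (int, float)):
        return True
    if isinstance(node, dict):
        return all(
            key in OPERATORS                      # propertyNames restriction
            and isinstance(args, list)
            and len(args) >= 2                    # minItems: 2
            and all(valid(a) for a in args)       # recursive items $ref
            for key, args in node.items()
        )
    return False

formula = {"add": [5, {"subtract": [3, 2]}]}  # 5 + (3 - 2)
print(valid(formula))                # True
print(valid({"modulo": [3, 2]}))     # False: key name not allowed
```

The propertyNames keyword is what pins down the allowed key names, which is exactly the restriction the question asks for.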
So what I ended up doing was extending the idea I had already used, the "oneOf" array of "required"s, by adding an "anyOf".
Thus, an operator schema is now:
"definitions": {
"add": {
"type": "array",
"minItems": 2,
"items": {
"anyOf": [
{ "$ref": "#/definitions/literal" }, // equates to a simple YAML scalar
{ "$ref": "#/definitions/constant" },
{ "$ref": "#/definitions/variable" },
{
"type": "object",
"oneOf": [
{ "required": ["add"] },
{ "required": ["subtract"] }
// more operator names
],
"properties": {
"add": { "$ref": "#/definitions/add" },
"subtract": { "$ref": "#/definitions/subtract" }
// more operator type references
}
}
]
}
},
// more definitions
}
This can be refactored to something that applies more easily across different operators like so:
"definitions": {
"operands": {
"literal": { "type": "number" }, // equates to a simple YAML scalar
"constant": {
"type": "object",
"properties": {
"value": { "type": "number" }
},
"required": [ "value" ]
},
"variable": {
"type": "object",
"properties": {
"name": { "type": "string" },
"type": { "type": "string" }
},
"required": [ "name", "type" ]
}
},
"operators": {
"add": {
"type": "array",
"minItems": 2,
"items": { "$ref": "#/definitions/anyOperandsOrOperators" }
},
"subtract": {
"type": "array",
"minItems": 2,
"maxItems": 2,
"items": { "$ref": "#/definitions/anyOperandsOrOperators" }
}
// more operator types
},
"anyOperator": {
"type": "object",
"oneOf": [
{ "required": ["add"] },
{ "required": ["subtract"] }
// more operator names
],
"properties": {
"add": { "$ref": "#/definitions/operators/add" },
"subtract": { "$ref": "#/definitions/operators/subtract" }
// more operator type references
}
},
"anyOperandsOrOperators":
{
"anyOf": [
{ "$ref": "#/definitions/operands/literal" },
{ "$ref": "#/definitions/operands/constant" },
{ "$ref": "#/definitions/operands/variable" },
{ "$ref": "#/definitions/anyOperator"}
]
}
}
And this means the YAML for an operator can look as follows:
#   ↓ mapping    ↓ mapping with specific name
add: [ 5, subtract: [ *constantA, *variableB ] ]
#      ↑ scalar