Is it possible to reference OpenAPI operation description from an external file?
Here is my sample code. I want to keep the description "This API is used to get user details" in a separate file and use it here like a variable or template or as a reference. Is there any way to do this?
```yaml
get:
  tags:
    - User
  summary: Get user details
  description: This API is used to get user details
  operationId: updateUser
  parameters:
    - name: userid
      in: path
      description: The id that needs to be pulled
      required: true
      schema:
        type: string
```
If you use Redocly CLI to bundle, then you can put it in a separate file like this:
```yaml
get:
  tags:
    - User
  summary: Get user details
  description:
    $ref: ./updateUser-description.md
  operationId: updateUser
  parameters:
    - name: userid
      in: path
      description: The id that needs to be pulled
      required: true
      schema:
        type: string
```
Then, create a separate file named updateUser-description.md (you could use a different name):

```md
This API is used to get user details
```
Then, when you run the bundle command, it resolves the `$ref` and replaces the description with the contents of the corresponding Markdown file:

```shell
npx @redocly/cli bundle my-openapi.yaml
```
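For illustration, the bundled output would then contain the resolved text inline, roughly like this (a sketch, assuming the Markdown file holds only the single sentence above):

```yaml
get:
  tags:
    - User
  summary: Get user details
  description: This API is used to get user details
  operationId: updateUser
```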
I'm trying to put the getBaseDir function from PHP in my OpenAPI YAML file to get the absolute path, but I don't know how to do it.
```yaml
responses:
  '200':
    description: successful Operation
    content:
      application/json:
        schema:
          $ref: 'var/www/docroot/www/app/code/local/Project/Myproject/OpenAPI/components/schemas/test.yaml'
```
I have a Spring Boot project in which I have developed an API with OpenAPI in YAML format and autogenerated the classes with openapi-generator-maven-plugin. The YAML is as follows:
```yaml
openapi: 3.0.2
info:
  version: 0.0.1-SNAPSHOT
  title: Example API
servers:
  - description: Localhost
    url: 'http://localhost:{port}/my-first-api'
    variables:
      port:
        default: '8080'
tags:
  - name: Example
paths:
  /api/v1/examples:
    get:
      summary: Get examples
      operationId: getExamples
      description: Obtain a list of available examples.
      tags:
        - Example
      responses:
        '200':
          description: OK
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/Example'
components:
  schemas:
    Example:
      title: Example
      type: object
      properties:
        description:
          type: string
        check:
          type: boolean
      example:
        description: 'Example'
        check: true
```
As you can see, I have defined that the local base path is:
http://localhost:8080/my-first-api
And later, the only available endpoint adds this path:
/api/v1/examples
Therefore, I expected that once the artifact was started locally, I could consume the endpoint from this URL:
http://localhost:8080/my-first-api/api/v1/examples
To my surprise, it doesn't work: that URL is not found. However, the following one does work:

http://localhost:8080/api/v1/examples

As you can see, the endpoint is reachable without the "my-first-api" part of the path, but I need that part to be there too... What could be happening?
Thanks!
In my tests, it worked just fine. The my-path part got changed, matching the spec changes. The generated controller is annotated like this:

```java
@RequestMapping("${project.name.base-path:/my-path}")
```

But as you can see, Spring would allow you to override this base URL using the project.name.base-path property. (The actual property name is probably different for you.)
So, my suggestion would be:

1. Check if the annotation on the generated controller changes at all.
2. If it does, check if the property is overridden at some point.
3. Check if you are setting Spring's own base path with the server.servlet.context-path property.
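If the goal is for the /my-first-api prefix to apply at runtime, note that the generator typically uses the servers URL only for documentation, not for request mappings, so one option is to set Spring's context path yourself. A minimal sketch, assuming a standard application.yml:

```yaml
# src/main/resources/application.yml
# Setting the servlet context path prefixes every endpoint with
# /my-first-api, so /my-first-api/api/v1/examples would resolve.
server:
  servlet:
    context-path: /my-first-api
```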
When executing a workflow there is a unique Execution ID. Is it possible to access this value from within the workflow? For example, if I wanted to use the Execution ID as the filename in a step:
```yaml
url: https://storage.googleapis.com/upload/storage/v1/b/bucketname/o
headers:
  Content-Type: application/json
query:
  uploadType: media
  name: ${string(EXECUTION_ID) + ".json"}
```
As of now, it's not possible to get the Workflow execution id.
The only environment variables that you can access are the following:

- GOOGLE_CLOUD_PROJECT_NUMBER: The workflow project's number.
- GOOGLE_CLOUD_PROJECT_ID: The workflow project's identifier.
- GOOGLE_CLOUD_LOCATION: The workflow's location.
- GOOGLE_CLOUD_WORKFLOW_ID: The workflow's identifier.
- GOOGLE_CLOUD_WORKFLOW_REVISION_ID: The workflow's revision identifier.
You can access them within the workflow with sys.get_env(). For example:
```yaml
- getProjectID:
    assign:
      - projectID: ${sys.get_env("GOOGLE_CLOUD_PROJECT_ID")}
```
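As a partial workaround for the filename use case above, one of the available variables could be substituted instead. A sketch (note the workflow ID is unique per workflow, not per execution, so this is not a true execution ID):

```yaml
# Sketch: build the upload filename from an available environment variable.
# GOOGLE_CLOUD_WORKFLOW_ID identifies the workflow, not the execution.
query:
  uploadType: media
  name: ${sys.get_env("GOOGLE_CLOUD_WORKFLOW_ID") + ".json"}
```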
We're having problems with some field types on our production server. Some field types are missing, causing the admin interface to crash when trying to list all items. The fields we're having problems with so far are Date and CloudinaryImages (note that DateTime and CloudinaryImage work fine).
When inspecting the source on our staging server and comparing to our production server we see the following difference in the compiled js files:
example.com/js/fields.js on staging:
```js
exports.Fields = {
  text: require("types/text/TextField"),
  textarea: require("types/textarea/TextareaField"),
  html: require("types/html/HtmlField"),
  cloudinaryimage: require("types/cloudinaryimage/CloudinaryImageField"),
  select: require("types/select/SelectField"),
  relationship: require("types/relationship/RelationshipField"),
  datetime: require("types/datetime/DatetimeField"),
  boolean: require("types/boolean/BooleanField"),
  embedly: require("types/embedly/EmbedlyField"),
  cloudinaryimages: require("types/cloudinaryimages/CloudinaryImagesField"),
  numberarray: require("types/numberarray/NumberArrayField"),
  code: require("types/code/CodeField"),
  number: require("types/number/NumberField"),
  textarray: require("types/textarray/TextArrayField"),
  url: require("types/url/UrlField"),
  file: require("types/file/FileField"),
  email: require("types/email/EmailField"),
  name: require("types/name/NameField"),
  password: require("types/password/PasswordField")
};
```
example.com/js/fields.js on production:
```js
exports.Fields = {
  text: require("types/text/TextField"),
  textarea: require("types/textarea/TextareaField"),
  html: require("types/html/HtmlField"),
  cloudinaryimage: require("types/cloudinaryimage/CloudinaryImageField"),
  select: require("types/select/SelectField"),
  relationship: require("types/relationship/RelationshipField"),
  datetime: require("types/datetime/DatetimeField"),
  boolean: require("types/boolean/BooleanField"),
  embedly: require("types/embedly/EmbedlyField"),
  numberarray: require("types/numberarray/NumberArrayField"),
  code: require("types/code/CodeField"),
  number: require("types/number/NumberField"),
  textarray: require("types/textarray/TextArrayField"),
  url: require("types/url/UrlField"),
  file: require("types/file/FileField"),
  email: require("types/email/EmailField"),
  name: require("types/name/NameField"),
  password: require("types/password/PasswordField")
};
```
An eagle-eyed reader can see that the staging server has `cloudinaryimages: require("types/cloudinaryimages/CloudinaryImagesField"),` while the production server does not. Date does not appear at all in either, perhaps because we removed all fields using that type the last time we encountered this problem?
Our site is hosted on Heroku. We've tried disabling node cache and rebuilding. We've tried promoting the staging build to production. The problem still persists. Our production server has environment set to production.
Does the build of the fields.js file depend on what fields we use? And how come our production server doesn't get them?
Any help appreciated.
Keystone version: 4.0.0-beta.8 (forked with a small addition not relevant to this)
I have a bunch of concourse pipeline files that look like the following:
```yaml
---
resources:
  - name: example
    type: git
    source:
      uri: git@github.internal.me.com:me/example.git
      branch: {{tracking_branch}}
      private_key: {{ssh_key}}
      paths:
        - code/src/do/teams/sampleapp
    params:
      depth: 1
  - name: deploy-image
    type: docker-image
    source:
      repository: {{docker_image_url}}
```
And I want to parse them in Ruby to perform a bunch of transformations (like validating them and updating some keys if they are missing).

The problem is, whenever I try to load them and then dump them back to files, the pieces that have {{something}} become:
```yaml
branch:
  ? tracking_branch:
  :
private_key:
  ? ssh_key:
  :
```
Why is it doing this and is there any way I can configure the parser not to do this? Just leave these variables as they are?
To avoid conflict with YAML's internal syntax you need to quote your values:

```yaml
---
resources:
  - name: example
    type: git
    source:
      uri: git@github.internal.me.com:me/example.git
      branch: '{{tracking_branch}}'
      private_key: '{{ssh_key}}'
      paths:
        - code/src/do/teams/sampleapp
    params:
      depth: 1
```
This sort of thing comes up in Ansible configuration files all the time for similar reasons.
The { and } characters are used in YAML for flow mappings (i.e. hashes). If you don't provide a value for a mapping entry, you get nil.

So in the case of `branch: {{tracking_branch}}`, since there are two pairs of braces, you get a hash with the key branch whose value (in Ruby) is:

```ruby
{{"tracking_branch"=>nil}=>nil}
```
When this is dumped back out to YAML you get the somewhat awkward and verbose:

```yaml
branch:
  ? tracking_branch:
  :
```
The solution is simply to quote the value:

```yaml
branch: "{{tracking_branch}}"
```
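A minimal Ruby sketch of the round-trip (using the standard library's Psych-backed YAML module) shows the difference quoting makes:

```ruby
require "yaml"

# Unquoted braces: YAML parses {{tracking_branch}} as nested flow
# mappings, so the value is a hash keyed by another hash, not a string.
unquoted = YAML.load("branch: {{tracking_branch}}")
# => {"branch"=>{{"tracking_branch"=>nil}=>nil}}

# Quoted braces: the value is an ordinary string and survives a
# load/dump round-trip intact.
quoted = YAML.load("branch: '{{tracking_branch}}'")
# => {"branch"=>"{{tracking_branch}}"}

puts YAML.dump(quoted)
```

The nested-hash value is what later gets dumped as the `? tracking_branch:` form, since Psych must use an explicit-key notation for non-scalar keys.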
Completely forgot that Concourse now offers ((var-name)) for templating. I just switched to that instead of {{var-name}} in the pipelines, and the YAML parser is now happy!