I have a Go struct for which I want to generate an OpenAPI schema automatically. Once I have an OpenAPI definition of that struct, I want to generate a JSON Schema from it, so that I can validate input data before it is parsed into those structs.
The struct looks like the following:
// mySpec: io.myapp.MinimalPod
type MinimalPod struct {
    Name string `json:"name"`

    // k8s: io.k8s.kubernetes.pkg.api.v1.PodSpec
    v1.PodSpec
}
The struct above is clearly an augmentation of the Kubernetes PodSpec.
The approach I have used is to generate the definition part for my struct MinimalPod myself; the definition for PodSpec comes from the upstream Kubernetes OpenAPI spec. PodSpec lives under the key io.k8s.kubernetes.pkg.api.v1.PodSpec in the upstream spec, and its definition is injected from there into my Properties. In the code that parses the struct above, I have templates for what to do when a struct field is a string.
If a field has a comment that starts with k8s:, the part that follows is the OpenAPI definition key of a Kubernetes object. In our case that key is io.k8s.kubernetes.pkg.api.v1.PodSpec, so I retrieve that field's definition from the upstream OpenAPI spec and embed it into the definition of my struct.
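A minimal sketch of that injection step, using the go-openapi/spec types (the function name and the local path to the upstream swagger.json are illustrative, not my real code):

import (
    "encoding/json"
    "os"

    "github.com/go-openapi/spec"
)

// injectUpstreamDefinition copies one definition (e.g.
// "io.k8s.kubernetes.pkg.api.v1.PodSpec") from the upstream Kubernetes
// swagger.json into the definitions map being built for my struct, so
// that local $refs to that key resolve.
func injectUpstreamDefinition(defs spec.Definitions, key, upstreamPath string) error {
    raw, err := os.ReadFile(upstreamPath)
    if err != nil {
        return err
    }
    var upstream struct {
        Definitions spec.Definitions `json:"definitions"`
    }
    if err := json.Unmarshal(raw, &upstream); err != nil {
        return err
    }
    defs[key] = upstream.Definitions[key]
    return nil
}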
Once I have generated the OpenAPI definition for this struct, it is injected into the Kubernetes OpenAPI schema's definitions under the key io.myapp.MinimalPod. I can then use the tool openapi2jsonschema to generate a JSON Schema from it, which produces a file named MinimalPod.json.
The jsonschema tool and the MinimalPod.json file can then be used to validate input given to my tool's parser and check that all fields were supplied correctly.
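The validation step can also be done directly from Go; here is a sketch assuming the github.com/xeipuuv/gojsonschema library (both file paths are illustrative):

import (
    "fmt"
    "log"

    "github.com/xeipuuv/gojsonschema"
)

func validate() {
    schema := gojsonschema.NewReferenceLoader("file:///path/to/MinimalPod.json")
    input := gojsonschema.NewReferenceLoader("file:///path/to/input.json")

    result, err := gojsonschema.Validate(schema, input)
    if err != nil {
        log.Fatal(err)
    }
    if result.Valid() {
        fmt.Println("input is valid")
        return
    }
    for _, e := range result.Errors() {
        fmt.Println("-", e) // one entry per failing field
    }
}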
Is this the right approach, or is there a tool/library that takes Go structs and gives me an OpenAPI schema? It would be fine if it cannot identify where to inject the Kubernetes OpenAPI schema; even automatic parsing of Go structs into an OpenAPI definition would be much appreciated.
Update 1
After following @mehdy's instructions, this is what I have tried:
I have used the import path github.com/kedgeproject/kedge/vendor/k8s.io/client-go/pkg/api/v1 to import the PodSpec definition instead of k8s.io/api/core/v1, and the code looks like this:
package foomodel

import "github.com/kedgeproject/kedge/vendor/k8s.io/client-go/pkg/api/v1"

// MinimalPod is a minimal pod.
// +k8s:openapi-gen=true
type MinimalPod struct {
    Name string `json:"name"`

    v1.PodSpec
}
Now I run the generator with the -i flag changed from k8s.io/api/core/v1 to github.com/kedgeproject/kedge/vendor/k8s.io/client-go/pkg/api/v1:
$ go run example/openapi-gen/main.go -i k8s.io/kube-openapi/example/model,github.com/kedgeproject/kedge/vendor/k8s.io/client-go/pkg/api/v1 -h example/foomodel/header.txt -p k8s.io/kube-openapi/example/foomodel
This is what is generated:
$ cat openapi_generated.go
// +build !ignore_autogenerated
/*
======
Some random text
======
*/
// This file was autogenerated by openapi-gen. Do not edit it manually!
package foomodel
import (
    spec "github.com/go-openapi/spec"
    common "k8s.io/kube-openapi/pkg/common"
)
func GetOpenAPIDefinitions(ref common.ReferenceCallback) map[string]common.OpenAPIDefinition {
    return map[string]common.OpenAPIDefinition{
        "k8s.io/kube-openapi/example/model.Container": {
            Schema: spec.Schema{
                SchemaProps: spec.SchemaProps{
                    Description: "Container defines a single application container that you want to run within a pod.",
                    Properties: map[string]spec.Schema{
                        "health": {
                            SchemaProps: spec.SchemaProps{
                                Description: "One common definitions for 'livenessProbe' and 'readinessProbe' this allows to have only one place to define both probes (if they are the same) Periodic probe of container liveness and readiness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes",
                                Ref:         ref("k8s.io/client-go/pkg/api/v1.Probe"),
                            },
                        },
                        "Container": {
                            SchemaProps: spec.SchemaProps{
                                Ref: ref("k8s.io/client-go/pkg/api/v1.Container"),
                            },
                        },
                    },
                    Required: []string{"Container"},
                },
            },
            Dependencies: []string{
                "k8s.io/client-go/pkg/api/v1.Container", "k8s.io/client-go/pkg/api/v1.Probe"},
        },
    }
}
This is all that gets generated. When I switch back to k8s.io/api/core/v1, the auto-generated config code comes to more than 8k lines. What am I missing here?
The definitions of k8s.io/client-go/pkg/api/v1.Container and k8s.io/client-go/pkg/api/v1.Probe are missing here, while with k8s.io/api/core/v1 as the import everything is generated.
Note: to reproduce the steps above, git clone https://github.com/kedgeproject/kedge into your GOPATH.
You can use the kube-openapi package for this. I am going to add a sample to the repo, but I've tested this simple model:
// Car is a simple car model.
// +k8s:openapi-gen=true
type Car struct {
    Color    string
    Capacity int

    // +k8s:openapi-gen=false
    HiddenFeature string
}
Assuming you created this file under k8s.io/kube-openapi/example/model, run:
go run example/openapi-gen/main.go -h example/model/header.txt -i k8s.io/kube-openapi/example/model -p k8s.io/kube-openapi/example/model
(You also need to add a header.txt file.) You should see a new file named openapi_generated.go created in the example/model folder. This is an intermediate generated file that contains your OpenAPI model:
func GetOpenAPIDefinitions(ref common.ReferenceCallback) map[string]common.OpenAPIDefinition {
    return map[string]common.OpenAPIDefinition{
        "k8s.io/kube-openapi/example/model.Car": {
            Schema: spec.Schema{
                SchemaProps: spec.SchemaProps{
                    Description: "Car is a simple car model.",
                    Properties: map[string]spec.Schema{
                        "Color": {
                            SchemaProps: spec.SchemaProps{
                                Type:   []string{"string"},
                                Format: "",
                            },
                        },
                        "Capacity": {
                            SchemaProps: spec.SchemaProps{
                                Type:   []string{"integer"},
                                Format: "int32",
                            },
                        },
                    },
                    Required: []string{"Color", "Capacity"},
                },
            },
            Dependencies: []string{},
        },
    }
}
From there you should be able to call the generated method, get the model for your type, and get its Schema.
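For instance, a minimal consumer of the generated code might look like this sketch; the reference callback here just renders local #/definitions refs, and you would adapt it to however you assemble the final spec:

import (
    "encoding/json"
    "fmt"

    "github.com/go-openapi/spec"

    model "k8s.io/kube-openapi/example/model"
)

func main() {
    // The callback decides how references to other types are rendered
    // in the final document.
    ref := func(path string) spec.Ref {
        return spec.MustCreateRef("#/definitions/" + path)
    }

    defs := model.GetOpenAPIDefinitions(ref)
    car := defs["k8s.io/kube-openapi/example/model.Car"]

    out, _ := json.MarshalIndent(car.Schema, "", "  ")
    fmt.Println(string(out))
}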
With some go get magic and a small change to the command line, I was able to generate the model for your type. Here is what you should change in your code:
package model

import "k8s.io/api/core/v1"

// MinimalPod is a minimal pod.
// +k8s:openapi-gen=true
type MinimalPod struct {
    Name string `json:"name"`

    v1.PodSpec
}
and then change the run command a little to include PodSpec in the generation:
go run example/openapi-gen/main.go -h example/model/header.txt -i k8s.io/kube-openapi/example/model,k8s.io/api/core/v1 -p k8s.io/kube-openapi/example/model
Here is what I got: https://gist.github.com/mbohlool/e399ac2458d12e48cc13081289efc55a
Related
I am trying to follow the documentation on the Nexus-Schema (nexusjs) website for adding scalar types to my GraphQL application.
I have tried adding many of the different implementations to my src/types/Types.ts file using the samples provided in the documentation and the interactive examples. My attempts include:
Without a 3rd party library:
import { scalarType } from '@nexus/schema' // or 'nexus', depending on the package version
import { Kind } from 'graphql'

const DateScalar = scalarType({
  name: 'Date',
  asNexusMethod: 'date',
  description: 'Date custom scalar type',
  parseValue(value) {
    return new Date(value)
  },
  serialize(value) {
    return value.getTime()
  },
  parseLiteral(ast) {
    if (ast.kind === Kind.INT) {
      return new Date(ast.value)
    }
    return null
  },
})
With graphql-iso-date 3rd party library:
import { GraphQLDate } from 'graphql-iso-date'
export const DateTime = GraphQLDate
With graphql-scalars 3rd party library (as shown in the ghost example):
import { decorateType } from '@nexus/schema' // or 'nexus', depending on the package version
import { GraphQLDate } from 'graphql-scalars'

export const GQLDate = decorateType(GraphQLDate, {
  rootTyping: 'Date',
  asNexusMethod: 'date',
})
I am using this new scalar type in an object definition like the following:
const SomeObject = objectType({
  name: 'SomeObject',
  definition(t) {
    t.date('createdAt') // t.date() is supposed to be available because of `asNexusMethod`
  },
})
In all cases, these types are exported from the types file and imported into the makeSchema's types property.
import * as types from './types/Types'

console.log("Found types", types)

export const apollo = new ApolloServer({
  schema: makeSchema({
    types,
    ...
  }),
  context: () => ({
    ...
  }),
})
The console.log statement above does show that consts declared in the types file are in scope:
Found types {
  GQLDate: Date,
  ...
}
If I run the app in development mode, everything boots up and runs fine.
ts-node-dev --transpile-only ./src/app.ts
However, I encounter errors whenever I try to compile the app to deploy to a server
ts-node ./src/app.ts && tsc
Note: This error occurs when running just ts-node ./src/app.ts, before it gets to tsc.
The errors shown during the build process are the following:
/Users/user/checkouts/project/node_modules/ts-node/src/index.ts:500
    return new TSError(diagnosticText, diagnosticCodes)
           ^
TSError: ⨯ Unable to compile TypeScript:
src/types/SomeObject.ts:11:7 - error TS2339: Property 'date' does not exist on type 'ObjectDefinitionBlock<"SomeObject">'.

11       t.date('createdAt')
Does anyone have any ideas on either:
a) How can I work around this error? While long-term solutions are ideal, temporary solutions would also be appreciated.
b) Any steps I could follow to debug this error? Or ideas on how to get additional information to assist with debugging?
Any assistance would be very much welcomed. Thanks!
The issue seems to be resolved when the --transpile-only flag is added to the nexus:reflect command.
This means the reflection command gets updated to:
ts-node --transpile-only ./src/app.ts
and the build command gets updated to:
env-cmd -f ./config/.env ts-node --transpile-only ./src/app.ts --nexusTypegen && tsc
A GitHub issue has also been created and can be reviewed here: https://github.com/graphql-nexus/schema/issues/690
I am attempting to launch a Dagster pipeline run with the GraphQL API. I have Dagit running locally and a working pipeline that I can trigger via the playground.
However, I am now trying to trigger the pipeline via GraphQL Playground, available at /graphql.
I am using the following mutation:
mutation ExecutePipeline(
  $repositoryLocationName: String!
  $repositoryName: String!
  $pipelineName: String!
  $runConfigData: RunConfigData!
  $mode: String!
)
...and hence am providing the following query variables:
{
  "repositoryName": "my_repo",
  "repositoryLocationName": <???>,
  "pipelineName": "my_pipeline",
  "mode": "dev",
  "runConfigData": {<MY_RUN_CONFIG>}
}
I am not sure what value repositoryLocationName should take. I have tried a few but receive the following error:
{
  "data": {
    "launchPipelineExecution": {
      "__typename": "PipelineNotFoundError"
    }
  }
}
This is the tutorial I am following.
Short answer:
Each repository lives inside a repository location. Dagster provides a default repository location name if you do not provide one yourself. To find the location name, you can click the repository picker in Dagit; it is shown next to the repository name.
In this example, the repository name is toys_repository, and the location name is dagster_test.toys.repo
Longer answer:
A workspace (defined with your workspace.yaml) is a collection of repository locations.
There are currently three types of repository locations:
Python file
Python module
gRPC server
Each repository location can have multiple repositories. Once you define the location, Dagster is able to automatically go find all the repositories in that location. In the example above, I defined my workspace to have a single Python module repository location:
load_from:
  - python_module: dagster_test.toys.repo
Note that I simply specified a module and did not specify a repository location name, so Dagster assigned a default one.
If I wanted to specify a location name, I would do:
load_from:
  - python_module:
      module_name: dagster_test.toys.repo
      location_name: "my_custom_location_name"
Similarly for a python file location:
load_from:
  - python_file: repo.py
Or with a custom repository location name:
load_from:
  - python_file:
      relative_path: repo.py
      location_name: "my_custom_location_name"
You can also find out using a GraphQL query. Starting from the example provided in the documentation, you just need to add
repositoryOrigin {
  repositoryLocationName
}
resulting in
query PaginatedPipelineRuns {
  pipelineRunsOrError {
    __typename
    ... on PipelineRuns {
      results {
        runId
        pipelineName
        status
        runConfigYaml
        repositoryOrigin {
          repositoryLocationName
        }
        stats {
          ... on PipelineRunStatsSnapshot {
            startTime
            endTime
            stepsFailed
          }
        }
      }
    }
  }
}
This will return the repository location name for any run that is returned. Trigger the pipeline you want the location name for in the UI before querying, and that run will be your first result.
I'm trying to work with Istio from Go, and am using the Kubernetes and Istio go-client code.
The problem I'm having is that I can't specify ObjectMeta or TypeMeta in my Istio ServiceRole object. I can only specify rules, which are inside the spec.
Below you can see what I got working:
import (
    v1alpha1 "istio.io/api/rbac/v1alpha1"
)

func getDefaultServiceRole(app nais.Application) *v1alpha1.ServiceRole {
    return &v1alpha1.ServiceRole{
        Rules: []*v1alpha1.AccessRule{
            {
                Ports: []int32{2},
            },
        },
    }
}
What I would like to do is have this code work:
func getDefaultServiceRole(app *nais.Application) *v1alpha1.ServiceRole {
    return &v1alpha1.ServiceRole{
        TypeMeta: metav1.TypeMeta{
            Kind:       "ServiceRole",
            APIVersion: "v1alpha1",
        },
        ObjectMeta: metav1.ObjectMeta{
            Name:      app.Name,
            Namespace: app.Namespace,
        },
        Spec: v1alpha1.ServiceRole{
            Rules: []*v1alpha1.AccessRule{
                {
                    Ports: []int32{2},
                },
            },
        },
    }
}
Can anyone point me in the right direction?
Ah, this is a pretty painful point: Istio requires Kubernetes CRD wrapper metadata (primarily the name and namespace fields), but those fields are not part of the API objects themselves, nor are they represented in the protos. (This is changing: the new MCP API for configuring components, which Galley uses, does encode these fields as protobufs, but that doesn't help for your use case.) Instead, you should use the types in istio.io/istio/pilot/pkg/config/kube/crd, which implement the K8s CRD interface.
The easiest way to work with Istio objects in Go is to use Pilot's libraries, particularly the istio.io/istio/pilot/pkg/model and istio.io/istio/pilot/pkg/config/kube/crd packages, together with the model.Config struct. You can either pass around the full model.Config (not great, because Spec has type proto.Message, so you need type assertions to extract the data you care about), or pass around the inner object and wrap it in a model.Config before you push it. You can use the model.ProtoSchema type to help with conversion to and from YAML and JSON. Pilot only defines ProtoSchema objects for the networking API, but the type is public and you can create them for arbitrary types.
So, using your example code I might try something like:
import (
    v1alpha1 "istio.io/api/rbac/v1alpha1"
    "istio.io/istio/pilot/pkg/model"
)

func getDefaultServiceRole() *v1alpha1.ServiceRole {
    return &v1alpha1.ServiceRole{
        Rules: []*v1alpha1.AccessRule{
            {
                Ports: []int32{2},
            },
        },
    }
}

func toConfig(app *nais.Application, role *v1alpha1.ServiceRole) model.Config {
    return model.Config{
        ConfigMeta: model.ConfigMeta{
            Name:      app.Name,
            Namespace: app.Namespace,
        },
        Spec: role,
    }
}

type Client model.ConfigStore

func (c Client) CreateRoleFor(app nais.Application, role *v1alpha1.ServiceRole) error {
    cfg := toConfig(&app, role)
    _, err := c.Create(cfg)
    return err
}
As a more complete example, we built the Istio CloudMap operator in this style. Here's the core of it, which pushes config to K8s with the Pilot libraries, and here's the incantation to create an instance of model.ConfigStore used to create objects. Finally, I want to call out explicitly what is only implicit in the example: when you call Create on the model.ConfigStore, the store relies on the metadata in the ProtoSchema objects used to create it, so be sure to initialize the store with ProtoSchema objects for all of the types you'll be working with.
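For illustration, here is a rough sketch of that initialization using Pilot's in-memory store; treat the descriptor contents (e.g. model.ServiceRole) as an assumption, since the available ProtoSchema vars depend on your Istio version:

import (
    "istio.io/istio/pilot/pkg/config/memory"
    "istio.io/istio/pilot/pkg/model"
)

// newClient builds a Client backed by an in-memory ConfigStore that knows
// about the ServiceRole schema; a CRD-backed store would be initialized
// with the same descriptor.
func newClient() Client {
    descriptor := model.ConfigDescriptor{model.ServiceRole}
    return Client(memory.Make(descriptor))
}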
You can achieve the same using just the K8s client libraries and the istio.io/istio/pilot/pkg/config/kube/crd package, but I have not done it firsthand and don't have examples handy.
Istio now supports:
import (
    istiov1alpha3 "istio.io/api/networking/v1alpha3"
    istiogov1alpha3 "istio.io/client-go/pkg/apis/networking/v1alpha3"
)

virtualService := istiogov1alpha3.VirtualService{
    TypeMeta: metav1.TypeMeta{
        Kind:       "VirtualService",
        APIVersion: "networking.istio.io/v1alpha3",
    },
    ObjectMeta: metav1.ObjectMeta{
        Name: "my-name",
    },
    Spec: istiov1alpha3.VirtualService{},
}
Where istiov1alpha3.VirtualService{} is an Istio API object.
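To push that object to a cluster you would typically go through the istio.io/client-go clientset. A rough sketch (the Create signature varies across client-go releases; the context-taking variant is assumed here):

import (
    "context"

    istiogov1alpha3 "istio.io/client-go/pkg/apis/networking/v1alpha3"
    versionedclient "istio.io/client-go/pkg/clientset/versioned"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/tools/clientcmd"
)

// createVirtualService pushes a VirtualService like the one built above,
// e.g. createVirtualService(kubeconfigPath, "default", &virtualService).
func createVirtualService(kubeconfig, namespace string, vs *istiogov1alpha3.VirtualService) error {
    restConfig, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        return err
    }
    ic, err := versionedclient.NewForConfig(restConfig)
    if err != nil {
        return err
    }
    _, err = ic.NetworkingV1alpha3().VirtualServices(namespace).Create(context.TODO(), vs, metav1.CreateOptions{})
    return err
}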
I am trying to write a protoc plugin that requires me to use custom options. I defined my custom option as shown in the example (https://developers.google.com/protocol-buffers/docs/proto#customoptions):
import "google/protobuf/descriptor.proto";
extend google.protobuf.MessageOptions {
string my_option = 51234;
}
I use it as follows:
message Hello {
  bool greeting = 1;
  string name = 2;
  int32 number = 3;

  option (my_option) = "telephone";
}
However, when I read the parsed request, the options field is empty for the "Hello" message.
I am doing the following to read it:
import sys
from google.protobuf.compiler import plugin_pb2 as plugin

data = sys.stdin.buffer.read()  # the request arrives as raw bytes on stdin
request = plugin.CodeGeneratorRequest()
request.ParseFromString(data)
When I print "request," it just gives me this
message_type {
  name: "Hello"
  field {
    name: "greeting"
    number: 1
    label: LABEL_REQUIRED
    type: TYPE_BOOL
    json_name: "greeting"
  }
  field {
    name: "name"
    number: 2
    label: LABEL_REQUIRED
    type: TYPE_STRING
    json_name: "name"
  }
  field {
    name: "number"
    number: 3
    label: LABEL_OPTIONAL
    type: TYPE_INT32
    json_name: "number"
  }
  options {
  }
}
As seen, the options field is empty even though I defined options in my .proto file. Is my syntax incorrect for defining custom options? Or could it be a problem with my version of protoc?
I'm writing my own protobuf Python plugin. I ran into the same problem and found a solution:
Put your custom options in a file, my_custom.proto.
Use protoc to generate a Python file from my_custom.proto => my_custom_pb2.py.
In your Python plugin code, import the generated module: import my_custom_pb2.
Turns out you need to have the _pb2.py file imported for the .proto file in which the custom option is defined. For example, if you are parsing a file (using ParseFromString) called example.proto which uses a custom option defined in option.proto, you must import option_pb2.py in the Python file that calls ParseFromString.
I'm relatively new to Go and need some help pointing me in the right direction.
I'm trying to declare a new Deployment type.
My imports look like:
import (
    "encoding/json"
    "fmt"
    "io/ioutil"

    yaml "gopkg.in/yaml.v2"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/kubernetes/pkg/api/v1"
    "k8s.io/kubernetes/pkg/apis/extensions/v1beta1"
)
When I try to create a Deployment Object like:
test := v1beta1.Deployment{
    Spec: v1beta1.DeploymentSpec{
        Template: v1.PodTemplateSpec{
            Spec: v1.PodSpec{
                Containers: []v1.Container{{
                    Name:  "test",
                    Image: "image_url",
                }},
            },
        },
    },
}
It works, but the Deployment object that is returned doesn't have a Kind, which is necessary to identify the object.
According to https://github.com/kubernetes/kubernetes/blob/master/pkg/apis/extensions/types.go#L162
There's an embedded metav1.TypeMeta which has the Kind field that I need. (For reference: https://github.com/kubernetes/apimachinery/blob/master/pkg/apis/meta/v1/types.go#L38)
I tried declaring metav1.TypeMeta in the struct literal like:
test := v1beta1.Deployment{
    metav1.TypeMeta: metav1.TypeMeta{Kind: "Deployment"},
    Spec: v1beta1.DeploymentSpec{
        Template: v1.PodTemplateSpec{
            Spec: v1.PodSpec{
                Containers: []v1.Container{{
                    Name:  "test",
                    Image: "image_url",
                }},
            },
        },
    },
}
But I get the following error:
unknown field '"k8s.io/apimachinery/pkg/apis/meta/v1".TypeMeta' in struct literal of type v1beta1.Deployment
I suspect it is because the metav1.TypeMeta declaration in the Deployment struct is an unexported field.
How should I declare Kind?
When using an embedded struct, the key is usually the type name without the package. You can declare your TypeMeta like this:
test := v1beta1.Deployment{
    TypeMeta: metav1.TypeMeta{
        APIVersion: "apps/v1beta1",
        Kind:       "Deployment",
    },
}
However, manually setting the TypeMeta on any Kubernetes API object is usually only necessary if you plan to persist these objects yourself (for example, to generate YAML files).
When using the Kubernetes client API (for example, the k8s.io/client-go package) to talk to an API server, you will not need the TypeMeta property, since all API operations are strongly typed anyway and all metadata can safely be inferred. After all, the API version and kind of a v1beta1.Deployment struct are obvious to the client library.
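As a sketch, creating a Deployment through the typed clientset looks like the following; note that no Kind or APIVersion is ever set by hand. This assumes the k8s.io/api/extensions/v1beta1 types and the pre-1.17 Create signature (newer client-go versions also take a context and options):

import (
    extv1beta1 "k8s.io/api/extensions/v1beta1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// deploy creates the given Deployment via the typed client. The client
// infers that it is creating an extensions/v1beta1 Deployment from the
// method chain alone, so TypeMeta stays empty.
func deploy(d *extv1beta1.Deployment) error {
    config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    if err != nil {
        return err
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        return err
    }
    _, err = clientset.ExtensionsV1beta1().Deployments("default").Create(d)
    return err
}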