How to do OpenAPI-format YAML validation without using a cluster? - Go

I have built a schema in OpenAPI format:
type Test_manifest struct {
    metav1.TypeMeta
    metav1.ObjectMeta
    Spec spec
}

type spec struct {
    Policies  []string
    Resources resources
    Results   []results
    Variables variables
}
This is not the complete schema, just a part of it.
And here is the actual YAML file:
apiVersion: cli.kyverno.io/v1beta1
kind: kyvernotest
metadata:
  name: test-check-nvidia-gpus
  labels:
    foolabel: foovalue
  annotations:
    fookey: foovalue
I'm trying to validate this incoming YAML file from the user. I can convert the YAML to JSON and then validate the values of the fields, but I don't see how to validate the field names themselves; for example, if the user writes name1 rather than name, how do I report an error for that? Basically, how do I validate the keys?
Here's what I've implemented for value validation:
test := "cmd/cli/kubectl-kyverno/test/test.yaml"
yamlFile, err := ioutil.ReadFile(test)
if err != nil {
fmt.Printf("Error: failed to read file %v", err)
}
policyBytes, err1 := yaml.ToJSON(yamlFile)
if err1 != nil {
fmt.Printf("failed to convert to JSON")
}
tests := &kyvernov1.Test_manifest{}
if err := json.Unmarshal(policyBytes, tests); err != nil {
fmt.Printf("failed to decode yaml")
}
if tests.TypeMeta.APIVersion == "" {
fmt.Printf("skipping file as tests.TypeMeta.APIVersion not found")
}
if tests.TypeMeta.Kind == "" {
fmt.Printf("skipping file as tests.TypeMeta.Kind not found")
} else if tests.TypeMeta.Kind != "KyvernoTest" {
fmt.Printf("skipping file as tests.TypeMeta.Kind is not `KyvernoTest`")
}
Also, we want this validation to happen outside the cluster.

Two things come to my mind:
I notice that you are trying to build a Kubernetes API extension manually, which is a lot of rework. I suggest you use a framework that handles this for you; that is the recommended and widely used practice, because it is too complicated to do by hand. Here are some resources: kubebuilder and operator-sdk. These solutions are OpenAPI-based as well; they let you define your schema in a simple template and generate all of the validation, API, and controller code for you.
If you want more validation and sanity testing, that is typically done with the help of an admission controller in your cluster. It intercepts the incoming request and, before it is processed by the API server, performs actions on it (verification, compliance, policy enforcement, authentication, etc.). You can read more about admission controllers in the Kubernetes documentation.
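For the specific problem of catching unknown or misspelled keys without any cluster involvement, one lightweight option (separate from the frameworks above) is strict decoding: convert the YAML to JSON as the question already does, then decode it with encoding/json's DisallowUnknownFields, which fails on keys that do not exist in the target struct. A minimal sketch, assuming a trimmed-down local mirror of the manifest type with explicit JSON tags (the real project would use its full Test_manifest type instead):

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "os"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/yaml"
)

// Trimmed mirror of the schema from the question, with JSON tags so the
// YAML keys map onto the struct fields.
type TestManifest struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`
    Spec              Spec `json:"spec,omitempty"`
}

type Spec struct {
    Policies []string `json:"policies,omitempty"`
}

func validateManifest(path string) (*TestManifest, error) {
    yamlFile, err := os.ReadFile(path)
    if err != nil {
        return nil, fmt.Errorf("failed to read %s: %w", path, err)
    }
    jsonBytes, err := yaml.ToJSON(yamlFile)
    if err != nil {
        return nil, fmt.Errorf("failed to convert to JSON: %w", err)
    }
    // DisallowUnknownFields makes decoding fail on keys that are not part of
    // the target struct, e.g. "name1" instead of "name".
    dec := json.NewDecoder(bytes.NewReader(jsonBytes))
    dec.DisallowUnknownFields()
    manifest := &TestManifest{}
    if err := dec.Decode(manifest); err != nil {
        return nil, fmt.Errorf("manifest has unknown or invalid fields: %w", err)
    }
    return manifest, nil
}

func main() {
    if _, err := validateManifest("cmd/cli/kubectl-kyverno/test/test.yaml"); err != nil {
        fmt.Println(err)
    }
}

A misspelled key then surfaces as an error such as json: unknown field "name1". sigs.k8s.io/yaml's UnmarshalStrict achieves the same effect in a single call if you prefer to skip the manual ToJSON step.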

Related

Is there a way to add collaborators to a Google Sheet using the Golang Client Library or REST API?

I am able to create a new spreadsheet with the gsheets client library, and my next step is to add editors to that newly created sheet so that it can be accessed by the users of the application.
Here is the code for creating the sheet:
ctx := context.Background()
srv, err := gsheets.NewService(ctx)
if err != nil {
    log.Printf("Unable to retrieve Sheets client: %v", err)
}

sp := &gsheets.Spreadsheet{
    Properties: &gsheets.SpreadsheetProperties{
        Title: groupName,
    },
}

spreadsheet, err := srv.Spreadsheets.Create(sp).Do()
if err != nil {
    return nil, err
}
I have searched through the Golang client library docs and the REST API docs, and I am unable to find anything related to adding collaborators.
I was expecting there to be some request object that allows me to add a collaborator using their email and role:
req := gsheets.Request{
    AddCollaborator: &gsheets.AddCollaboratorRequest{
        Email: "person#gmail.com",
        Role:  "editor",
    },
}

busr := &gsheets.BatchUpdateSpreadsheetRequest{
    Requests: []*gsheets.Request{&req},
}

res, err := srv.Spreadsheets.BatchUpdate(spreadsheetId, busr).Do()
Or, at the very least, I was expecting there to be an API endpoint where I can achieve the same result.
I am also curious whether there is a way to create this new spreadsheet as read-only to the public; that would at least allow me to continue developing.
It is possible to add editors with the google.golang.org/api/sheets/v4 library.
You can simply create a spreadsheet with:
func (r *SpreadsheetsService) Create(spreadsheet *Spreadsheet) *SpreadsheetsCreateCall
and add editors with the Editors type:
type Editors struct {
    ...
    // Users: The email addresses of users with edit access to the protected
    // range.
    Users []string `json:"users,omitempty"`
    ...
}
Check the library docs for more details.
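In the sheets/v4 library, the Editors field lives on a ProtectedRange, so one way to exercise it is an AddProtectedRangeRequest in a batchUpdate that protects the sheet and lists who may edit it. A hedged sketch (spreadsheet ID, sheet ID, and email are placeholders):

package main

import (
    "context"
    "log"

    gsheets "google.golang.org/api/sheets/v4"
)

func addEditorsToProtectedRange(spreadsheetID string, sheetID int64, emails []string) error {
    ctx := context.Background()
    srv, err := gsheets.NewService(ctx)
    if err != nil {
        return err
    }
    req := &gsheets.Request{
        AddProtectedRange: &gsheets.AddProtectedRangeRequest{
            ProtectedRange: &gsheets.ProtectedRange{
                // Protect the whole sheet; Editors.Users lists who may edit it.
                Range:   &gsheets.GridRange{SheetId: sheetID},
                Editors: &gsheets.Editors{Users: emails},
            },
        },
    }
    busr := &gsheets.BatchUpdateSpreadsheetRequest{Requests: []*gsheets.Request{req}}
    if _, err := srv.Spreadsheets.BatchUpdate(spreadsheetID, busr).Do(); err != nil {
        return err
    }
    return nil
}

func main() {
    if err := addEditorsToProtectedRange("spreadsheet-id", 0, []string{"person@example.com"}); err != nil {
        log.Fatal(err)
    }
}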

Continuously Watch Pods in Kubernetes using golang SDK

I want to watch changes to Pods continuously using the client-go Kubernetes SDK. I am using the below code to watch the changes:
func (c *Client) watchPods(namespace string, restartLimit int) {
    fmt.Println("Watch Kubernetes Pods")
    watcher, err := c.Clientset.CoreV1().Pods(namespace).Watch(context.Background(),
        metav1.ListOptions{
            FieldSelector: "",
        })
    if err != nil {
        fmt.Printf("error create pod watcher: %v\n", err)
        return
    }
    for event := range watcher.ResultChan() {
        pod, ok := event.Object.(*corev1.Pod)
        if !ok || !checkValidPod(pod) {
            continue
        }
        owner := getOwnerReference(pod)
        for _, c := range pod.Status.ContainerStatuses {
            if reflect.ValueOf(c.RestartCount).Int() >= int64(restartLimit) {
                if c.State.Waiting != nil && c.State.Waiting.Reason == "CrashLoopBackOff" {
                    doSomething()
                }
                if c.State.Terminated != nil {
                    doSomethingElse()
                }
            }
        }
    }
}
The code is watching changes to the Pods, but it exits after some time. I want to run this continuously. I also want to know how much load it puts on the API Server and what is the best way to run a control loop for looking for changes.
With Watch, a long-poll connection is established with the API server. Once the connection is up, the API server sends an initial batch of events and then any subsequent changes. The connection is dropped after a timeout occurs, which is why your loop exits after a while.
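If you stay with a raw watch, the usual workaround is to re-establish it in an outer loop whenever the result channel closes. A rough sketch that reuses the receiver, helpers, and per-pod logic from the question (so it is a fragment, not a complete program):

func (c *Client) watchPodsForever(namespace string, restartLimit int) {
    for {
        watcher, err := c.Clientset.CoreV1().Pods(namespace).Watch(context.Background(), metav1.ListOptions{})
        if err != nil {
            fmt.Printf("error creating pod watcher: %v\n", err)
            time.Sleep(5 * time.Second) // back off before retrying
            continue
        }
        for event := range watcher.ResultChan() {
            pod, ok := event.Object.(*corev1.Pod)
            if !ok || !checkValidPod(pod) {
                continue
            }
            // ... same restart-count / CrashLoopBackOff handling as in the question ...
        }
        // The channel closed because the server ended the watch; loop around
        // and start a new one.
    }
}

A naive restart like this re-reads the initial batch of events each time; tracking the last seen resourceVersion avoids reprocessing them, and an informer (suggested next) does that bookkeeping for you.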
I would suggest using an Informer instead of setting up a watch yourself, as it is more optimized and easier to set up. When creating an informer, you register functions that are invoked when pods are created, updated, and deleted. As with watch, informers can target specific pods using a labelSelector. You can also create shared informers, which are shared across multiple controllers; this reduces the load on the API server.
Below are a few links to get you started:
https://aly.arriqaaq.com/kubernetes-informers/
https://www.cncf.io/blog/2019/10/15/extend-kubernetes-via-a-shared-informer/
https://pkg.go.dev/k8s.io/client-go/informers
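For completeness, a minimal shared-informer sketch along those lines (the handler body is a placeholder, and out-of-cluster you would build the config with clientcmd rather than rest.InClusterConfig):

package main

import (
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/informers"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
    "k8s.io/client-go/tools/cache"
)

func main() {
    cfg, err := rest.InClusterConfig() // or clientcmd for out-of-cluster use
    if err != nil {
        panic(err)
    }
    clientset := kubernetes.NewForConfigOrDie(cfg)

    // A shared informer factory with a 30s resync; it handles re-listing and
    // re-watching for you, so the loop never silently stops.
    factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
    podInformer := factory.Core().V1().Pods().Informer()

    podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
        UpdateFunc: func(oldObj, newObj interface{}) {
            pod, ok := newObj.(*corev1.Pod)
            if !ok {
                return
            }
            for _, cs := range pod.Status.ContainerStatuses {
                if cs.State.Waiting != nil && cs.State.Waiting.Reason == "CrashLoopBackOff" {
                    fmt.Printf("pod %s/%s is crash-looping\n", pod.Namespace, pod.Name)
                }
            }
        },
    })

    stop := make(chan struct{})
    factory.Start(stop)
    cache.WaitForCacheSync(stop, podInformer.HasSynced)
    select {} // block forever; in a real program, wire this to shutdown signals
}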

Handling DB connections and Env config in DDD (clean/hexagonal) architecture

While I grasped the general idea, I'm having trouble seeing the best practice for managing config envs and managing the DB connection.
Meaning:
If I have the repository (for PostgreSQL for example), should I pass the NewRepository function the DB configuration? Will it not somehow adversely affect the architecture principles (maintenance, testability, etc.)?
How do we handle things like defer db.Close()?
I mean, we'd obviously want it to defer in relation to the main function's scope, so it's problematic to move that code into the Repository "class" (unless there's a way to do that with Context?).
On the other hand, calling NewRepository in main scope but then having the db handle the connection outside of it feels kind of strange.
Most of the examples I've found used the main function, so it was easy. The question is how you do that correctly when employing the DDD (clean/hexagonal) architecture, especially so that all the pieces stay "pluggable" without having to change the code around them.
Here is an example I put together. Is there a violation of some principle of the DDD pattern here, or is this actually how these things are done?
1. Shouldn't I handle the defer db.Close() inside the repository itself? Maybe with Context I can defer it in relation to the main function's scope but from inside the repository?
2. Should I really pass the config into NewRepository?
pkg/main.go:

func main() {
    // get configuration structs via .env file
    configuration, err := config.NewConfig()
    if err != nil {
        panic(err)
    }

    postgresRepo, err := postgres.NewRepository(configuration.Database)
    defer postgresRepo.DB.Close()

    myService := autocomplete.NewService(postgresRepo)
    handler := rest.NewHandler(myService)
    ...
}
pkg/config/config.go:
// Config is a struct that contains configuration variables
type Config struct {
    Environment string
    Port        string
    Database    *Database
}

// Database is a struct that contains DB's configuration variables
type Database struct {
    Host     string
    Port     string
    User     string
    DB       string
    Password string
}

// NewConfig creates a new Config struct
func NewConfig() (*Config, error) {
    env.CheckDotEnv()
    port := env.MustGet("PORT")
    // set default PORT if missing
    if port == "" {
        port = "3000"
    }
    return &Config{
        Environment: env.MustGet("ENV"),
        Port:        port,
        Database: &Database{
            Host:     env.MustGet("DATABASE_HOST"),
            Port:     env.MustGet("DATABASE_PORT"),
            User:     env.MustGet("DATABASE_USER"),
            DB:       env.MustGet("DATABASE_DB"),
            Password: env.MustGet("DATABASE_PASSWORD"),
        },
    }, nil
}
Instead of passing the database configuration into your repository, try passing the database connection. For example:
func main() {
    db, err := sql.Open("postgres", "...")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    repo := postgres.NewAutocompleteRepo(db)
    svc := autocomplete.NewService(repo)
    handler := autocomplete.NewHTTPHandler(svc)
}
This will leave the responsibility of connecting to the database outside of the repository and allow for easier testing.
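For example, the repository side might look like this (the package name, type, and SQL are an illustrative sketch, not from the question):

package postgres

import (
    "context"
    "database/sql"
)

// AutocompleteRepo depends only on *sql.DB; it neither reads configuration
// nor owns the connection lifecycle, so main keeps the defer db.Close().
type AutocompleteRepo struct {
    db *sql.DB
}

func NewAutocompleteRepo(db *sql.DB) *AutocompleteRepo {
    return &AutocompleteRepo{db: db}
}

func (r *AutocompleteRepo) Suggest(ctx context.Context, prefix string) ([]string, error) {
    rows, err := r.db.QueryContext(ctx,
        `SELECT term FROM suggestions WHERE term LIKE $1 || '%' LIMIT 10`, prefix)
    if err != nil {
        return nil, err
    }
    defer rows.Close()

    var terms []string
    for rows.Next() {
        var t string
        if err := rows.Scan(&t); err != nil {
            return nil, err
        }
        terms = append(terms, t)
    }
    return terms, rows.Err()
}

In tests you can then hand NewAutocompleteRepo a connection to a throwaway database, or an *sql.DB backed by a mock driver, which is the easier testing the answer refers to.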

Golang HTTP uploading file to S3 using tusd only uploading metadata

I am using the tusd library to upload a file directly to S3 in Go. It seems to be functioning; however, tusd uploads two files: a .info metadata file and a .bin file with the actual content. For some reason my code is only uploading the .info file.
The documentation is quite tricky to navigate, so perhaps I have missed a setting somewhere.
Code as gist to show both the server and the client code.
There are multiple issues here.
Your tusd library import paths are wrong; they should be:
"github.com/tus/tusd/pkg/handler"
"github.com/tus/tusd/pkg/s3store"
You don't use the S3 store properly; you set up a configuration that stores files directly on your server:
fStore := filestore.FileStore{
    Path: "./uploads",
}
Instead it should be something like this:
// S3 access configuration
s3Config := &aws.Config{
    Region:           aws.String(os.Getenv("AWS_REGION")),
    Credentials:      credentials.NewStaticCredentials(os.Getenv("AWS_ACCESS_KEY_ID"), os.Getenv("AWS_SECRET_ACCESS_KEY"), ""),
    DisableSSL:       aws.Bool(true),
    S3ForcePathStyle: aws.Bool(true),
}

// Setting up the S3 storage
s3Store := s3store.New(os.Getenv("AWS_BUCKET_NAME"), s3.New(session.Must(session.NewSession()), s3Config))

// Creates a new and empty store composer
composer := handler.NewStoreComposer()

// UseIn sets this store as the core data store in the passed composer and adds all possible extensions to it.
s3Store.UseIn(composer)

// Setting up the handler
handler, err := handler.NewHandler(handler.Config{
    BasePath:      "/files/",
    StoreComposer: composer,
})
if err != nil {
    panic(fmt.Errorf("Unable to create handler: %s", err))
}

// Listen and serve
http.Handle("/files/", http.StripPrefix("/files/", handler))
err = http.ListenAndServe(":8080", nil)
if err != nil {
    panic(fmt.Errorf("Unable to listen: %s", err))
}
It is possible that your client isn't working properly either (I didn't test it).
I would recommend you use https://github.com/eventials/go-tus instead of trying to implement the protocol by yourself.
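A minimal client sketch with that library, following its README (the endpoint URL and file path are placeholders; the URL assumes the tusd handler from the server snippet above):

package main

import (
    "log"
    "os"

    tus "github.com/eventials/go-tus"
)

func main() {
    f, err := os.Open("./my-file.bin")
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    // Point the client at the tusd handler mounted under /files/.
    client, err := tus.NewClient("http://localhost:8080/files/", nil)
    if err != nil {
        log.Fatal(err)
    }

    upload, err := tus.NewUploadFromFile(f)
    if err != nil {
        log.Fatal(err)
    }

    uploader, err := client.CreateUpload(upload)
    if err != nil {
        log.Fatal(err)
    }

    // Upload transfers the file content in chunks until the upload completes.
    if err := uploader.Upload(); err != nil {
        log.Fatal(err)
    }
}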

Google Directory API add Custom Schema/Update it to Users per google API (in go)

I am trying to upload a custom schema to all users of a company in GSuite. This custom schema contains their GitHub usernames, which I extracted with the GitHub API.
The problem is that after running the code, the custom schema is not added to the accounts in GSuite.
Relevant code (a connection to GSuite with admin authentication is established, and the gitHubAccs map has all user entries; if you still want more code, I can provide it, I'm just trying to keep it simple):
for _, u := range allUsers.Users {
    if u.CustomSchemas != nil {
        log.Printf("%v", string(u.CustomSchemas["User_Names"]))
    } else {
        u.CustomSchemas = map[string]googleapi.RawMessage{}
    }

    nameFromGsuite := u.Name.FullName
    if githubLogin, ok := gitHubAccs[nameFromGsuite]; ok {
        userSchemaForGithub := GithubAcc{GitHub: githubLogin}
        jsonRaw, err := json.Marshal(userSchemaForGithub)
        if err != nil {
            log.Fatalf("Something went wrong logging: %v", err)
        }
        u.CustomSchemas["User_Names"] = jsonRaw
        adminService.Users.Update(u.Id, u)
    } else {
        log.Printf("User not found for %v\n", nameFromGsuite)
    }
}
This is the struct for the JSON encoding:
type GithubAcc struct {
    GitHub string `json:"GitHub"`
}
For anyone stumbling upon this.
Everything in the code snippet is correct. From the way the method is written, I expected that adminService.Users.Update() actually updates the user. Instead, it returns a UsersUpdateCall.
You need to execute that update by calling .Do().
From the API:
Do executes the "directory.users.update" call.
So the solution is to change adminService.Users.Update(u.Id, u)
into adminService.Users.Update(u.Id, u).Do()
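With the error checked, the call inside the loop becomes (log message is illustrative; the variables are the ones from the question's loop):

if _, err := adminService.Users.Update(u.Id, u).Do(); err != nil {
    log.Printf("failed to update user %v: %v", u.Id, err)
}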
