Golang kubernetes go-client cast Deployment to DeploymentList

I am creating a program that gets a list of all deployments from Kubernetes as a *v1.DeploymentList. I managed to do that and it works. Then I do some processing of this list and execute many actions afterwards. Now I have a new requirement: I also need to be able to pull just ONE deployment and apply the same logic to it. The problem is that when I get the deployment, what I receive is a *v1.Deployment, which of course is different from *v1.DeploymentList, as the latter is a list. This DeploymentList is not a slice, so I can NOT just use append, and I do not know how to convert/cast. As a "pragmatic" solution, what I am trying to do is convert that Deployment into a DeploymentList and then apply the rest of my logic to it, as changing everything else would imply a lot of burden at this point.
I have the following code:
func listK8sDeployments(the_clientset *kubernetes.Clientset, mirrorDeploy *string) *v1.DeploymentList {
if mirrorDeploy != nil {
tmp_deployments, err := the_clientset.AppsV1().Deployments(apiv1.NamespaceDefault).Get(context.TODO(), *mirrorDeploy, metav1.GetOptions{})
if err != nil {
panic(err.Error())
}
// Here I would need to convert the *v1.Deployment into a *v1.DeploymentList to return it according to my EXISTING logic. If I can do this, I do not need to change anything else in the program.
// Return the DeploymentList with one single deployment inside and finish.
}
deployments_list, err := the_clientset.AppsV1().Deployments(apiv1.NamespaceDefault).List(context.TODO(), metav1.ListOptions{})
if err != nil {
panic(err.Error())
}
return deployments_list
}
It returns a *v1.Deployment, but I need this data as a list, even if it is a *v1.DeploymentList with a single item inside. I have tried to append, but *v1.DeploymentList is not a slice, so I cannot do it. Any ideas as to how to achieve this, or should I change the way things are done? Please explain. FYI: I am new to Go and to programming k8s-related things too.

When you look at the definition of v1.DeploymentList, you can see where the Deployments are located:
// DeploymentList is a list of Deployments.
type DeploymentList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata.
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// Items is the list of Deployments.
Items []Deployment `json:"items" protobuf:"bytes,2,rep,name=items"`
}
Then you can easily create a new instance of it with your value:
func listK8sDeployments(the_clientset *kubernetes.Clientset, mirrorDeploy *string) *v1.DeploymentList {
if mirrorDeploy != nil && *mirrorDeploy != "" { // guard against a nil pointer before dereferencing
tmp_deployments, err := the_clientset.AppsV1().Deployments(apiv1.NamespaceDefault).Get(context.TODO(), *mirrorDeploy, metav1.GetOptions{})
if err != nil {
panic(err.Error())
}
// create a new list with your deployment and return it
deployments_list := v1.DeploymentList{Items: []v1.Deployment{*tmp_deployments}}
return &deployments_list
}
deployments_list, err := the_clientset.AppsV1().Deployments(apiv1.NamespaceDefault).List(context.TODO(), metav1.ListOptions{})
if err != nil {
panic(err.Error())
}
return deployments_list
}

Related

Can I connect to Memgraph using Go?

I'd like to connect from Go to the running instance of the Memgraph database. I'm using Docker and I've installed the Memgraph Platform. What exactly do I need to do?
The procedure for connecting from Go to Memgraph is rather simple. For this you need to use the Bolt protocol. Here are the needed steps:
First, create a new directory for your app, /MyApp, and position yourself in it. Next, create a program.go file with the following code:
package main
import (
"fmt"
"github.com/neo4j/neo4j-go-driver/v4/neo4j"
)
func main() {
dbUri := "bolt://localhost:7687"
driver, err := neo4j.NewDriver(dbUri, neo4j.BasicAuth("username", "password", ""))
if err != nil {
panic(err)
}
// Handle the driver's lifetime based on your application's lifetime requirements.
// The driver's lifetime is usually bound by the application lifetime, which usually implies one driver instance per application.
defer driver.Close()
item, err := insertItem(driver)
if err != nil {
panic(err)
}
fmt.Printf("%v\n", item.Message)
}
func insertItem(driver neo4j.Driver) (*Item, error) {
// Sessions are short-lived, cheap to create and NOT thread safe. Typically create one or more sessions
// per request in your web application. Make sure to call Close on the session when done.
// For multi-database support, set sessionConfig.DatabaseName to requested database
// Session config will default to write mode, if only reads are to be used configure session for
// read mode.
session := driver.NewSession(neo4j.SessionConfig{})
defer session.Close()
result, err := session.WriteTransaction(createItemFn)
if err != nil {
return nil, err
}
return result.(*Item), nil
}
func createItemFn(tx neo4j.Transaction) (interface{}, error) {
records, err := tx.Run(
"CREATE (a:Greeting) SET a.message = $message RETURN 'Node ' + id(a) + ': ' + a.message",
map[string]interface{}{"message": "Hello, World!"})
// In face of driver native errors, make sure to return them directly.
// Depending on the error, the driver may try to execute the function again.
if err != nil {
return nil, err
}
record, err := records.Single()
if err != nil {
return nil, err
}
// You can also retrieve values by name, with e.g. `id, found := record.Get("n.id")`
return &Item{
Message: record.Values[0].(string),
}, nil
}
type Item struct {
Message string
}
Now, create a go.mod file using the go mod init example.com/hello command.
I've mentioned the Bolt driver earlier. You need to add it with go get github.com/neo4j/neo4j-go-driver/v4@v4.3.1. You can run your program with go run ./program.go.
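After those two commands, your go.mod should look something like this (the go directive will match whatever toolchain version you have installed):
module example.com/hello

go 1.17

require github.com/neo4j/neo4j-go-driver/v4 v4.3.1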
The complete documentation is located on the Memgraph site.

How can I reach a struct member in an interface type

I have to keep structs of multiple types in a slice and seed them. I take them as a variadic parameter of an interface type and iterate over them. If I call a method of the interface it works, but when I try to reach the underlying struct I can't. How can I solve that?
Note: the Seed() method returns the file name of the data.
The Interface:
type Seeder interface {
Seed() string
}
Method:
func (AirportCodes) Seed() string {
return "airport_codes.json"
}
SeederSlice:
seederModelList = []globals.Seeder{
m.AirportCodes{},
m.Term{},
}
And the last one, SeedSchema function:
func (db *Database) SeedSchema(models ...globals.Seeder) error {
var (
subjects []globals.Seeder
fileByte []byte
err error
// tempMember map[string]interface{}
)
if len(models) == 0 {
subjects = seederModelList
} else {
subjects = models
}
for _, model := range subjects {
fileName := model.Seed()
fmt.Printf("%+v\n", model)
if fileByte, err = os.ReadFile("db/seeds/" + fileName); err != nil {
fmt.Println("asd", err)
// return err
}
if err = json.Unmarshal(fileByte, &model); err != nil {
fmt.Println("dsa", err)
// return err
}
modelType := reflect.TypeOf(model).Elem()
modelPtr2 := reflect.New(modelType)
fmt.Printf("%s\n", modelPtr2)
}
return nil
}
I can reach the exact model, but I can't create a member and seed it.
After some back and forth in the comments, I'll just post this minimal answer here. It's by no means a definitive "this is what you do" type answer, but I hope this can at least provide you with enough information to get you started. To get to this point, I've made a couple of assumptions based on the snippets of code you've provided, and I'm assuming you want to seed the DB through a command of sorts (e.g. your_bin seed). That means the following assumptions have been made:
The Schemas and corresponding models/types are present (like AirportCodes and the like)
Each type has its own source file (name comes from Seed() method, returning a .json file name)
Seed data is, therefore, assumed to be in a format like [{"seed": "data"}, {"more": "data"}].
The seed files can be appended, and should the schema change, the data in the seed files could be changed all together. This is of less importance ATM, but still, it's an assumption that should be noted.
OK, so let's start by moving all of the JSON files into a predictable location. In a sizeable, real-world application you'd use something like the XDG base path, but for the sake of brevity, let's assume you're running this in a scratch container from / and all relevant assets have been copied into said container.
It'd make sense to have all seed files in the base path under a seed_data directory. Each file contains the seed data for a specific table, and therefore all the data within a file maps neatly onto a single model. Let's ignore relational data for the time being. We'll just assume that, for now, the data in these files is at least internally consistent, and any X-to-X relational data will have the right ID fields allowing for JOINs and the like.
Let's start
So we have our models, and the data in JSON files. Now we can just create a slice of said models, making sure that data which needs to be present before other data is inserted comes earlier in the slice (lower index). Kind of like this:
seederModelList = []globals.Seeder{
m.AirportCodes{}, // seeds before Term
m.Term{}, // seeds after AirportCodes
}
But instead of returning the file name from this Seed method, why not pass in the connection and have the model handle its own data, like this:
func (_ AirportCodes) Seed(db *gorm.DB) error {
// we know what file this model uses
data, err := os.ReadFile("seed_data/airport_codes.json")
if err != nil {
return err
}
// we have the data, we can unmarshal it as AirportCode instances
codes := []*AirportCodes{}
if err := json.Unmarshal(data, &codes); err != nil {
return err
}
// now INSERT, UPDATE, or UPSERT:
return db.Clauses(clause.OnConflict{
UpdateAll: true,
}).Create(&codes).Error
}
Do the same for other models, like Terms:
func (_ Terms) Seed(db *gorm.DB) error {
// we know what file this model uses
data, err := os.ReadFile("seed_data/terms.json")
if err != nil {
return err
}
// we have the data, we can unmarshal it as Terms instances
terms := []*Terms{}
if err := json.Unmarshal(data, &terms); err != nil {
return err
}
// now INSERT, UPDATE, or UPSERT:
return db.Clauses(clause.OnConflict{
UpdateAll: true,
}).Create(&terms).Error
}
Of course, this does result in a bit of a mess considering we have DB access in a model, which should really be just a DTO if you ask me. This also leaves a lot to be desired in terms of error handling, but the basic gist of it would be this:
func main() {
db, _ := gorm.Open(mysql.Open(dsn), &gorm.Config{}) // omitted error handling for brevity
seeds := []interface{
Seed(*gorm.DB) error
}{
model.AirportCodes{},
model.Terms{},
// etc...
}
for _, m := range seeds {
if err := m.Seed(db); err != nil {
panic(err)
}
}
sqlDB, _ := db.DB() // gorm v2: grab the underlying *sql.DB in order to close it
sqlDB.Close()
}
OK, so this should get us started, but let's just move this all into something a bit nicer by:
Moving the whole DB interaction out of the DTO/model
Wrap things into a transaction, so we can roll back on error
Update the initial slice a bit to make things cleaner
So as mentioned earlier, I'm assuming you have something like repositories to handle DB interactions in a separate package. Rather than calling Seed on the model, and passing the DB connection into those, we should instead rely on our repositories:
db, _ := gorm.Open() // same as before
acs := repo.NewAirportCodes(db) // pass in connection
tms := repo.NewTerms(db) // again...
Now our model can still return the JSON file name, or we can have that as a const in the repos. At this point, it doesn't really matter. The main thing is, we can have the actual inserting of data done in the repositories.
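For illustration, such a repository might look like this. This is just a sketch: the AirportCodesRepo type, its constructor, and the file-name constant are assumptions rather than existing code, and the model type is assumed to be in scope:

type AirportCodesRepo struct {
	db *gorm.DB
}

func NewAirportCodes(db *gorm.DB) *AirportCodesRepo {
	return &AirportCodesRepo{db: db}
}

// assumed seed file location; this could just as well come from the model's Seed()
const airportCodesSeedFile = "seed_data/airport_codes.json"

func (r *AirportCodesRepo) Seed() error {
	data, err := os.ReadFile(airportCodesSeedFile)
	if err != nil {
		return err
	}
	codes := []*AirportCodes{}
	if err := json.Unmarshal(data, &codes); err != nil {
		return err
	}
	// same UPSERT as before, but the DB access now lives in the repository
	return r.db.Clauses(clause.OnConflict{
		UpdateAll: true,
	}).Create(&codes).Error
}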
You can, if you want, change your seed slice thing to something like this:
calls := []func() error{
acs.Seed, // assuming your repo has a Seed function that does what it's supposed to do
tms.Seed,
}
Then perform all the seeding in a loop:
for _, c := range calls {
if err := c(); err != nil {
panic(err)
}
}
Now, this just leaves us with the issue of the transaction stuff. Thankfully, gorm makes this really rather simple:
db, _ := gorm.Open()
db.Transaction(func(tx *gorm.DB) error {
acs := repo.NewAirportCodes(tx) // create repo's, but use TX for connection
if err := acs.Seed(); err != nil {
return err // returning an error will automatically rollback the transaction
}
tms := repo.NewTerms(tx)
if err := tms.Seed(); err != nil {
return err
}
return nil // commit transaction
})
There's a lot more you can fiddle with here, like creating batches of related data that can be committed separately; you can add more precise error handling and more informative logging, and handle conflicts better (distinguish between CREATE and UPDATE, etc.). Above all else, though, something worth keeping in mind:
Gorm has a migration system
I have to confess that I've not dealt with gorm in quite some time, but IIRC, you can have the tables be auto-migrated if the model changes, and run either custom Go code and/or SQL files on startup, which can be used, rather easily, to seed the data. Might be worth looking at the feasibility of that...
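If you want to explore that, a minimal sketch of the auto-migration part (AutoMigrate is gorm's built-in migration entry point; the model names are the ones from the examples above):

db, err := gorm.Open(mysql.Open(dsn), &gorm.Config{})
if err != nil {
	panic(err)
}
// create or update the tables for the models we're about to seed
if err := db.AutoMigrate(&AirportCodes{}, &Terms{}); err != nil {
	panic(err)
}
// then run the seeding transaction from earlier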

How to get the latest change time of StatefulSet in k8s

I know, for example, that you can get the lastUpdateTime of a Deployment with kubectl:
kubectl get deploy <deployment-name> -o jsonpath={.status.conditions[1].lastUpdateTime}
Or via client-go:
func deploymentCheck(namespace string, clientset *kubernetes.Clientset) {
// get the deployments in the namespace
deployments, err := clientset.AppsV1().Deployments(namespace).List(context.TODO(), metav1.ListOptions{})
if errors.IsNotFound(err) {
log.Fatal("\nNo deployments in the namespace", err)
} else if err != nil {
log.Fatal("\nFailed to fetch deployments in the namespace", err)
}
var dptNames []string
for _, dpt := range deployments.Items {
dptNames = append(dptNames, dpt.Name)
}
// check the last update time of the deployments
for _, dpt := range deployments.Items {
lastUpdateTime := dpt.Status.Conditions[1].LastUpdateTime
dptAge := time.Since(lastUpdateTime.Time)
fmt.Printf("\nDeployment %v age: %v", dpt.Name, dptAge)
}
}
The equivalent of lastUpdateTime := dpt.Status.Conditions[1].LastUpdateTime for a StatefulSet doesn't seem to exist.
So, how can I get the lastUpdateTime of a StatefulSet?
I noticed that the only things that change after someone edits a given resource are the resource's lastAppliedConfiguration, Generation and ObservedGeneration. So, I stored them in lists:
for _, deployment := range deployments.Items {
deploymentNames = append(deploymentNames, deployment.Name)
lastAppliedConfig := deployment.GetAnnotations()["kubectl.kubernetes.io/last-applied-configuration"]
lastAppliedConfigs = append(lastAppliedConfigs, lastAppliedConfig)
generations = append(generations, deployment.Generation)
observedGenerations = append(observedGenerations, deployment.Status.ObservedGeneration)
}
Here's the full function:
func DeploymentCheck(namespace string, clientset *kubernetes.Clientset) ([]string, []string, []int64, []int64) {
var deploymentNames []string
var lastAppliedConfigs []string
var generations []int64
var observedGenerations []int64
deployments, err := clientset.AppsV1().Deployments(namespace).List(context.TODO(), metav1.ListOptions{})
if errors.IsNotFound(err) {
log.Print("No deployments in the namespace", err)
} else if err != nil {
log.Print("Failed to fetch deployments in the namespace", err)
}
for _, deployment := range deployments.Items {
deploymentNames = append(deploymentNames, deployment.Name)
lastAppliedConfig := deployment.GetAnnotations()["kubectl.kubernetes.io/last-applied-configuration"]
lastAppliedConfigs = append(lastAppliedConfigs, lastAppliedConfig)
generations = append(generations, deployment.Generation)
observedGenerations = append(observedGenerations, deployment.Status.ObservedGeneration)
}
return deploymentNames, lastAppliedConfigs, generations, observedGenerations
}
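The same approach works for StatefulSets, since they expose the same Generation and Status.ObservedGeneration fields through the AppsV1 client. A sketch, assuming the same imports:

func StatefulSetCheck(namespace string, clientset *kubernetes.Clientset) ([]string, []string, []int64, []int64) {
	var stsNames []string
	var lastAppliedConfigs []string
	var generations []int64
	var observedGenerations []int64
	statefulSets, err := clientset.AppsV1().StatefulSets(namespace).List(context.TODO(), metav1.ListOptions{})
	if errors.IsNotFound(err) {
		log.Print("No statefulsets in the namespace", err)
	} else if err != nil {
		log.Print("Failed to fetch statefulsets in the namespace", err)
	}
	for _, sts := range statefulSets.Items {
		stsNames = append(stsNames, sts.Name)
		lastAppliedConfigs = append(lastAppliedConfigs, sts.GetAnnotations()["kubectl.kubernetes.io/last-applied-configuration"])
		generations = append(generations, sts.Generation)
		observedGenerations = append(observedGenerations, sts.Status.ObservedGeneration)
	}
	return stsNames, lastAppliedConfigs, generations, observedGenerations
}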
I use all this information to instantiate a struct called Namespace, which contains all major resources a k8s namespace can have.
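A sketch of what the StatefulSet-related part of that struct might hold (the field names are inferred from the comparison below):

type Namespace struct {
	stsLastAppliedConfig  []string
	stsGeneration         []int64
	stsObservedGeneration []int64
	// ... plus equivalent fields for deployments, services, and so on
}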
Then, after a given time I check the same namespace again and check if its resources had any changes:
if !reflect.DeepEqual(namespace.stsLastAppliedConfig, namespaceCopy.stsLastAppliedConfig) {
...
} else if !reflect.DeepEqual(namespace.stsGeneration, namespaceCopy.stsGeneration) {
...
} else if !reflect.DeepEqual(namespace.stsObservedGeneration, namespaceCopy.stsObservedGeneration) {
...
}
So, the only workaround I found was to compare the resource's configuration, including StatefulSets', after a given time. Apparently, for some resources you cannot get any information about their lastUpdateTime.
I also found out that lastUpdateTime is actually not reliable, as it treats minor cluster changes as changes to the resource. For example, if a cluster rotates and kills all pods, the lastUpdateTime of a Deployment will be updated. That's not what I wanted: I wanted to detect user changes to resources, like when someone applies an edited yaml file or runs kubectl edit.
@hypperster, I hope it helps.

Is there an API to retrieve the neo4j relationship details using the Go bolt driver?

Using the neo4j Go bolt driver, I am able to get nodes, but not relationships from the graph db.
The Run() API on neo4j.Transaction has Result as its return type, which can give me nodes, but not the relationships?
If I try the query in the neo4j browser, it shows me the properties of the relationships, but if I send the same query programmatically, I don’t get anything. Am I missing something?
MATCH (:a {name: 'foo'})-[r:bar]->() RETURN properties(r)
The above query works in the browser and returns:
{
"X":"20",
"Y":"40"
}
But the same query sent via the driver returns no error, yet contains nothing.
There might be something wrong with your mapping code.
It should look like this:
func readCoordinates(driver neo4j.Driver) ([]Coordinates, error) {
session := driver.NewSession(neo4j.SessionConfig{})
defer session.Close()
result, err := session.ReadTransaction(executeReadCoordinates)
if err != nil {
return nil, err
}
return result.([]Coordinates), nil
}
func executeReadCoordinates(tx neo4j.Transaction) (interface{}, error) {
records, err := tx.Run("MATCH (:A {name: 'foo'})-[r:BAR]->() RETURN properties(r)", map[string]interface{}{})
if err != nil {
return nil, err
}
var results []Coordinates
for records.Next() {
record := records.Record()
if props, found := record.Get("properties(r)"); !found {
return nil, fmt.Errorf("expected properties not found")
} else {
properties := props.(map[string]interface{})
coordinates := Coordinates{
X: properties["x"].(int64),
Y: properties["y"].(int64),
}
results = append(results, coordinates)
}
}
return results, nil
}
I changed the case of the node label (convention: PascalCase), relationship type (convention: SCREAMING_SNAKE_CASE) and properties (convention: snake_case).
The code assumes those properties are of type int64 and fetches a list.
If you want a single pair of coordinates, then remove the for loop and use records.Single() instead.
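For instance, a sketch of that single-result variant, using the same assumed Coordinates struct:

func executeReadSingleCoordinates(tx neo4j.Transaction) (interface{}, error) {
	records, err := tx.Run("MATCH (:A {name: 'foo'})-[r:BAR]->() RETURN properties(r)", map[string]interface{}{})
	if err != nil {
		return nil, err
	}
	record, err := records.Single() // errors unless exactly one record was produced
	if err != nil {
		return nil, err
	}
	props, found := record.Get("properties(r)")
	if !found {
		return nil, fmt.Errorf("expected properties not found")
	}
	properties := props.(map[string]interface{})
	return Coordinates{
		X: properties["x"].(int64),
		Y: properties["y"].(int64),
	}, nil
}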

Golang repository pattern

I'm trying to implement the repository pattern in a Go app (a simple web service) and am trying to find a better way to avoid code duplication.
Here is the code.
Interfaces are:
type IRoleRepository interface {
GetAll() ([]Role, error)
}
type ISaleChannelRepository interface {
GetAll() ([]SaleChannel, error)
}
And implementation:
func (r *RoleRepository) GetAll() ([]Role, error) {
var result []Role
var err error
var rows *sql.Rows
connection := r.provider.GetConnection()
defer connection.Close()
rows, err = connection.Query("SELECT Id,Name FROM Position")
defer rows.Close()
if err != nil {
return result, err
}
for rows.Next() {
entity := new(Role)
err = sqlstruct.Scan(entity, rows)
if err != nil {
return result, err
}
result = append(result, *entity)
}
err = rows.Err()
if err != nil {
return result, err
}
return result, err
}
func (r *SaleChannelRepository) GetAll() ([]SaleChannel, error) {
var result []SaleChannel
var err error
var rows *sql.Rows
connection := r.provider.GetConnection()
defer connection.Close()
rows, err = connection.Query("SELECT DISTINCT SaleChannel 'Name' FROM Employee")
defer rows.Close()
if err != nil {
return result, err
}
for rows.Next() {
entity := new(SaleChannel)
err = sqlstruct.Scan(entity, rows)
if err != nil {
return result, err
}
result = append(result, *entity)
}
err = rows.Err()
if err != nil {
return result, err
}
return result, err
}
As you can see, the differences are only in a few words. I tried to find something like Generics from C#, but didn't find it.
Can anyone help me?
No, Go does not have generics and won't have them in the foreseeable future¹.
You have three options:
Refactor your code so that you have a single function which accepts an SQL statement and another function, and:
Queries the DB with the provided statement.
Iterates over the result's rows.
For each row, calls the provided function whose task is to scan the row.
In this case, you'll have a single generic "querying" function,
and the differences will be solely in the "scanning" functions; see the sketch after this list.
Several variations on this are possible, but I suspect you get the idea.
Use the sqlx package which basically is to SQL-driven databases what encoding/json is to JSON data streams: it uses reflection on your types to create and execute SQL to populate them.
This way you'll get reusability on another level: you simply won't write boilerplate code.
Use code generation which is the Go-native way of having "code templates" (that's what generics are about).
This way, you (usually) write a Go program which takes some input (in whatever format you wish), reads it and writes out one or more files which contain Go code, which is then compiled.
In your very simple case, you can start with a template of your Go function and some sort of table which maps SQL statements to the types created from the selected data.
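To make the first option concrete, here's a minimal sketch; queryAll and the r.db field are illustrative assumptions (your r.provider.GetConnection() would slot in the same way):

// queryAll runs the statement and hands each row to scan, which appends
// the scanned value to whatever slice it closes over.
func queryAll(db *sql.DB, query string, scan func(*sql.Rows) error) error {
	rows, err := db.Query(query)
	if err != nil {
		return err
	}
	defer rows.Close()
	for rows.Next() {
		if err := scan(rows); err != nil {
			return err
		}
	}
	return rows.Err()
}

// The per-type "scanning" functions are then the only duplicated part:
func (r *RoleRepository) GetAll() ([]Role, error) {
	var result []Role
	err := queryAll(r.db, "SELECT Id,Name FROM Position", func(rows *sql.Rows) error {
		var entity Role
		if err := sqlstruct.Scan(&entity, rows); err != nil {
			return err
		}
		result = append(result, entity)
		return nil
	})
	return result, err
}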
I'd note that your code indeed looks woefully unidiomatic.
No one in their right mind implements "repository patterns" in Go, but that's sort of okay so long it keeps you happy—we all are indoctrinated to a certain degree with the languages/environments we're accustomed to,—but your connection := r.provider.GetConnection() looks alarming: the Go's database/sql is drastically different from "popular" environments and frameworks so I'd highly recommend to start with this and this.
¹ (Update as of 2021-05-31) Go will have generics as the proposal to implement them has been accepted and the work implementing them is in progress.
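For what it's worth, once type parameters land, the duplication could collapse into a single function. A speculative sketch based on the accepted proposal, not on any released Go version:

func GetAll[T any](db *sql.DB, query string, scan func(*sql.Rows) (T, error)) ([]T, error) {
	rows, err := db.Query(query)
	if err != nil {
		return nil, err
	}
	defer rows.Close()
	var result []T
	for rows.Next() {
		entity, err := scan(rows)
		if err != nil {
			return nil, err
		}
		result = append(result, entity)
	}
	return result, rows.Err()
}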
Forgive me if I'm misunderstanding, but a better pattern might be something like the following:
type RepositoryItem interface {
Name() string // for example
}
type Repository interface {
GetAll() ([]RepositoryItem, error)
}
At the moment, you essentially have multiple interfaces for each type of repository, so unless you're going to implement multiple types of RoleRepository, you might as well not have the interface.
Having generic Repository and RepositoryItem interfaces might make your code more extensible (not to mention easier to test) in the long run.
A contrived example might be (if we assume a Repository vaguely correlates to a backend) implementations such as MySQLRepository and MongoDBRepository. By abstracting the functionality of the repository, you're protecting against future mutations.
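To sketch that out (MySQLRepository, the Role type, and the column name are all hypothetical):

type Role struct{ name string }

func (r Role) Name() string { return r.name } // Role satisfies RepositoryItem

type MySQLRepository struct{ db *sql.DB }

// GetAll satisfies Repository; a MongoDBRepository could implement the
// same interface against a completely different backend.
func (m *MySQLRepository) GetAll() ([]RepositoryItem, error) {
	rows, err := m.db.Query("SELECT Name FROM Position")
	if err != nil {
		return nil, err
	}
	defer rows.Close()
	var items []RepositoryItem
	for rows.Next() {
		var r Role
		if err := rows.Scan(&r.name); err != nil {
			return nil, err
		}
		items = append(items, r)
	}
	return items, rows.Err()
}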
I would very much advise seeing #kostix's answer also, though.
interface{} is the "generic type" in Go. I can imagine doing something like this:
package main
import "fmt"
type IRole struct {
RoleId uint
}
type ISaleChannel struct {
Profitable bool
}
type GenericRepo interface{
GetAll([]interface{})
}
// conceptual repo to store all Roles and SaleChannels
type Repo struct {
IRoles []IRole
ISaleChannels []ISaleChannel
}
func (r *Repo) GetAll(ifs []interface{}) {
// database implementation here before type switch
for _, v := range ifs {
switch v := v.(type) {
default:
fmt.Printf("unexpected type %T\n", v)
case IRole:
fmt.Printf("Role %t\n", v)
r.IRoles = append(r.IRoles, v)
case ISaleChannel:
fmt.Printf("SaleChannel %d\n", v)
r.ISaleChannels = append(r.ISaleChannels, v)
}
}
}
func main() {
getter := new(Repo)
// mock slice
data := []interface{}{
IRole{1},
IRole{2},
IRole{3},
ISaleChannel{true},
ISaleChannel{false},
IRole{4},
}
getter.GetAll(data)
fmt.Println("IRoles: ", getter.IRoles)
fmt.Println("ISaleChannels: ", getter.ISales)
}
This way you don't have to end up with two structs and/or interfaces for IRole and ISaleChannel.
