gqlgen coding style: code is tightly coupled to neo4j (golang / go)

I am just starting to get used to gqlgen for creating a Go-based GraphQL API for a personal project I am working on. This is my first attempt at adding a user node to the db; the code works, though it took a while, as I am new to Go, Neo4j, and GraphQL.
My problem is that it feels very coupled to the db; my coding style would be to abstract the db operations away from this code. I don't feel sufficiently experienced to achieve this, so I am looking for advice before heading into further programming. I have 25+ years of experience with different languages (SQL, C++, PHP, Basic, Java, JavaScript, Pascal, etc.), so I am comfortable with programming and databases (though not so much graph databases).
Code from schema.resolvers.go
// UpsertUser adds or updates a user in the system
func (r *mutationResolver) UpsertUser(ctx context.Context, input model.UserInput) (*model.User, error) {
    // Update or insert?
    var userId string
    if input.ID != nil {
        userId = *input.ID
    } else {
        newUuid, err := uuid.NewV4() // Create a Version 4 UUID.
        if err != nil {
            return nil, fmt.Errorf("UUID creation error %v", err)
        }
        userId = newUuid.String()
    }
    // Open session
    session := r.DbDriver.NewSession(neo4j.SessionConfig{AccessMode: neo4j.AccessModeWrite})
    defer func(session neo4j.Session) {
        if err := session.Close(); err != nil {
            // Close errors are silently discarded here.
        }
    }(session)
    // Start write data to neo4j
    neo4jWriteResult, neo4jWriteErr := session.WriteTransaction(
        func(transaction neo4j.Transaction) (interface{}, error) {
            transactionResult, driverNativeErr :=
                transaction.Run(
                    "MERGE (u:User {uuid: $uuid}) ON CREATE SET u.uuid = $uuid, u.name = $name, u.userType = $userType ON MATCH SET u.uuid = $uuid, u.name = $name, u.userType = $userType RETURN u.uuid, u.name, u.userType",
                    map[string]interface{}{"uuid": userId, "name": input.Name, "userType": input.UserType})
            // Raw driver error
            if driverNativeErr != nil {
                return nil, driverNativeErr
            }
            // If a result was returned
            if transactionResult.Next() {
                // Return the created node's data
                return &model.User{
                    ID:       transactionResult.Record().Values[0].(string),
                    Name:     transactionResult.Record().Values[1].(string),
                    UserType: model.UserType(transactionResult.Record().Values[2].(string)),
                }, nil
            }
            // The node wasn't created; there was an error, return it
            return nil, transactionResult.Err()
        })
    // End write data to neo4j
    // write failed
    if neo4jWriteErr != nil {
        return nil, neo4jWriteErr
    }
    // write success
    return neo4jWriteResult.(*model.User), nil
}
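As an aside on the query itself: the ON CREATE SET and ON MATCH SET branches assign identical properties, and re-setting uuid is redundant since MERGE already matched (or created) on it, so the Cypher could likely be reduced to a single SET clause:

MERGE (u:User {uuid: $uuid})
SET u.name = $name, u.userType = $userType
RETURN u.uuid, u.name, u.userType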
Code from resolver.go
type Resolver struct {
    DbDriver neo4j.Driver
}
schema.graphqls
enum UserType {
  "Administrator account"
  ADMIN
  "Tutor account"
  TUTOR
  "Student account"
  STUDENT
  "Unvalidated account"
  UNVALIDATED
  "Suspended account"
  SUSPENDED
  "Retired account"
  RETIRED
  "Scheduled for deletion"
  DELETE
}

type User {
  id: ID!
  name: String!
  userType: UserType!
}

input UserInput {
  id: String
  name: String!
  userType: UserType!
}

type Mutation {
  upsertUser(input: UserInput!): User!
}
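One common way to get the decoupling asked about here is to hide persistence behind a small repository interface that the resolver depends on, keeping the Neo4j details in one place. A minimal sketch under that assumption (UserRepository, neo4jUserRepository, and the module path are illustrative names, not part of the original code):

package graph // illustrative placement

import (
    "context"

    "github.com/neo4j/neo4j-go-driver/v4/neo4j" // assumed driver version, matching the neo4j.Driver API used above

    "myapp/graph/model" // hypothetical module path for the gqlgen models
)

// UserRepository is all the resolver sees; nothing in it mentions Neo4j.
type UserRepository interface {
    UpsertUser(ctx context.Context, user *model.User) (*model.User, error)
}

// neo4jUserRepository is the one place that knows about the driver and Cypher.
type neo4jUserRepository struct {
    driver neo4j.Driver
}

func (repo *neo4jUserRepository) UpsertUser(ctx context.Context, user *model.User) (*model.User, error) {
    session := repo.driver.NewSession(neo4j.SessionConfig{AccessMode: neo4j.AccessModeWrite})
    defer session.Close()
    result, err := session.WriteTransaction(func(tx neo4j.Transaction) (interface{}, error) {
        // ... run the same MERGE query as in the resolver above and build
        // the *model.User from the returned record ...
        return user, nil // placeholder so the sketch compiles
    })
    if err != nil {
        return nil, err
    }
    return result.(*model.User), nil
}

// The resolver holds only the interface, so tests can swap in a fake
// repository and the GraphQL layer stays free of db concerns.
type Resolver struct {
    Users UserRepository
}

func (r *mutationResolver) UpsertUser(ctx context.Context, input model.UserInput) (*model.User, error) {
    // UUID generation for new users could stay here or move into the repository.
    return r.Users.UpsertUser(ctx, &model.User{Name: input.Name, UserType: input.UserType})
}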

Related

Connect to a socket server in golang

I am using this library in Go to connect to a socket server: https://github.com/hesh915/go-socket.io-client. I have to pass an auth object, as the server expects an object that authenticates the user, but in this implementation Query only takes a map as a parameter. I get an EOF error when sending the stringified struct to Query as given below:
opts.Query["auth"] = str
I have created a nested struct for that purpose, tried converting the struct into byte data using marshal, type-cast it to a string, and passed it as a key-value pair to the map, but that didn't seem to work. Here is what my nested auth struct looks like:
{"userInfo":{"id":"","username":"","permissionsInfo":{"permissions":["",""]}},"additionalInfo":""}
How do I send this information with the implementation given below:
opts := &socketio_client.Options{
    Transport: "websocket",
    Query:     make(map[string]string),
}
opts.Query["user"] = "user"
opts.Query["pwd"] = "pass"
uri := "http://192.168.1.70:9090/socket.io/"
I have provided the link to the library.
Minimal Reproducible Example
package main

import (
    "encoding/json"
    "fmt"
    "io"

    socketio_client "github.com/hesh915/go-socket.io-client"
)

type User struct {
    Userinfo       UserInfo `json:"userInfo"`
    AdditionalInfo string   `json:"additionalInfo"`
}

type UserInfo struct {
    Id              string          `json:"id"`
    Username        string          `json:"username"`
    Permissionsinfo PermissionsInfo `json:"permissionsInfo"` // tag aligned with the JSON shown above (was "permissionInfo")
}

type PermissionsInfo struct {
    Permissions []string `json:"permissions"`
}

func main() {
    user := &User{
        Userinfo: UserInfo{
            Id:       "",
            Username: "",
            Permissionsinfo: PermissionsInfo{
                Permissions: []string{"", ""},
            }},
        AdditionalInfo: "",
    }
    b, _ := json.Marshal(user)
    fmt.Println(string(b))
    opts := &socketio_client.Options{
        Transport: "websocket",
        Query:     make(map[string]string),
    }
    opts.Query["auth"] = string(b)
    uri := "http://192.168.1.70:9090" + "/socket.io" // origin was undefined in the original snippet; using the URI from above
    client, err := socketio_client.NewClient(uri, opts)
    if err != nil {
        if err == io.EOF {
            fmt.Printf("NewClient error:%v\n", err)
            return
        }
        return
    }
    client.On("error", func() {
        fmt.Println("on error")
    })
}
The error I'm getting from the above code:
NewClient error EOF
My server end has to receive the auth params as socket.handshake.auth, and that's understandable, because this library only supports query parameters, not an auth payload. How can I achieve this functionality? Any help would be appreciated.
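As an aside (an assumption, not from the original post): socket.io query parameters travel in the handshake URL, so raw JSON placed in Query can corrupt the request; URL-encoding the payload first is a cheap thing to rule out:

import "net/url"

// Sketch: escape the marshaled JSON before it goes into the handshake query.
// Whether the server can then read it from the query (rather than
// socket.handshake.auth) is an assumption about the server side.
opts.Query["auth"] = url.QueryEscape(string(b))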

BigQuery nullable types in golang when using BigQuery storage write API

I'm switching from the legacy streaming API to the storage write API following this example in golang:
https://github.com/alexflint/bigquery-storage-api-example
In the old code I used bigquery's null types to indicate a field can be null:
type Person struct {
    Name bigquery.NullString `bigquery:"name"`
    Age  bigquery.NullInt64  `bigquery:"age"`
}

var persons = []Person{
    {
        Name: ToBigqueryNullableString(""), // this will be null in bigquery
        Age:  ToBigqueryNullableInt64("20"),
    },
    {
        Name: ToBigqueryNullableString("David"),
        Age:  ToBigqueryNullableInt64("60"),
    },
}

func main() {
    ctx := context.Background()
    bigqueryClient, _ := bigquery.NewClient(ctx, "project-id")
    inserter := bigqueryClient.Dataset("dataset-id").Table("table-id").Inserter()
    err := inserter.Put(ctx, persons)
    if err != nil {
        log.Fatal(err)
    }
}

func ToBigqueryNullableString(x string) bigquery.NullString {
    if x == "" {
        return bigquery.NullString{Valid: false}
    }
    return bigquery.NullString{StringVal: x, Valid: true}
}

func ToBigqueryNullableInt64(x string) bigquery.NullInt64 {
    if x == "" {
        return bigquery.NullInt64{Valid: false}
    }
    if s, err := strconv.ParseInt(x, 10, 64); err == nil {
        return bigquery.NullInt64{Int64: s, Valid: true}
    }
    return bigquery.NullInt64{Valid: false}
}
After switching to the new API:
var persons = []*personpb.Row{
    {
        Name: "",
        Age:  20,
    },
    {
        Name: "David",
        Age:  60,
    },
}

func main() {
    ctx := context.Background()
    client, _ := storage.NewBigQueryWriteClient(ctx)
    defer client.Close()

    stream, err := client.AppendRows(ctx)
    if err != nil {
        log.Fatal("AppendRows: ", err)
    }

    var row personpb.Row
    descriptor, err := adapt.NormalizeDescriptor(row.ProtoReflect().Descriptor())
    if err != nil {
        log.Fatal("NormalizeDescriptor: ", err)
    }

    var opts proto.MarshalOptions
    var data [][]byte
    for _, row := range persons {
        buf, err := opts.Marshal(row)
        if err != nil {
            log.Fatal("protobuf.Marshal: ", err)
        }
        data = append(data, buf)
    }

    err = stream.Send(&storagepb.AppendRowsRequest{
        WriteStream: fmt.Sprintf("projects/%s/datasets/%s/tables/%s/streams/_default", "project-id", "dataset-id", "table-id"),
        Rows: &storagepb.AppendRowsRequest_ProtoRows{
            ProtoRows: &storagepb.AppendRowsRequest_ProtoData{
                WriterSchema: &storagepb.ProtoSchema{
                    ProtoDescriptor: descriptor,
                },
                Rows: &storagepb.ProtoRows{
                    SerializedRows: data,
                },
            },
        },
    })
    if err != nil {
        log.Fatal("AppendRows.Send: ", err)
    }

    _, err = stream.Recv()
    if err != nil {
        log.Fatal("AppendRows.Recv: ", err)
    }
}
With the new API I need to define the types in a .proto file, so I need something else to define nullable fields. I tried optional fields:
syntax = "proto3";
package person;
option go_package = "/personpb";
message Row {
optional string name = 1;
int64 age = 2;
}
but it gives me an error when trying to stream (not at compile time):
BqMessage.proto: person_Row.Name: The [proto3_optional=true] option may only be set on proto3 fields, not person_Row.Name
Another option I tried is oneof, writing the proto file like this:
syntax = "proto3";

import "google/protobuf/struct.proto";

package person;
option go_package = "/personpb";

message Row {
  NullableString name = 1;
  int64 age = 2;
}

message NullableString {
  oneof kind {
    google.protobuf.NullValue null = 1;
    string data = 2;
  }
}
Then use it like this:
var persons = []*personpb.Row{
    {
        Name: &personpb.NullableString{Kind: &personpb.NullableString_Null{
            Null: structpb.NullValue_NULL_VALUE,
        }},
        Age: 20,
    },
    {
        Name: &personpb.NullableString{Kind: &personpb.NullableString_Data{
            Data: "David",
        }},
        Age: 60,
    },
}
...
But this gives me the following error:
Invalid proto schema: BqMessage.proto: person_Row.person_NullableString.null: FieldDescriptorProto.oneof_index 0 is out of range for type "person_NullableString".
I guess it's because the API doesn't know how to handle the oneof type, and I need to tell it about this somehow.
How can I use something like the bigquery.Null* types when using the new storage API? Any help will be appreciated.
Take a look at this sample for an end-to-end example using a proto2 syntax file in Go.
proto3 is still a bit of a special beast when working with the Storage API, for a couple of reasons:
The current behavior of the Storage API is to operate using proto2 semantics.
Currently, the Storage API doesn't understand wrapper types, which were the original way in which proto3 was meant to communicate optional presence (e.g. NULL in BigQuery fields). Because of this, it tends to treat wrapper fields as a submessage with a value field (in BigQuery, a STRUCT with a single leaf field).
Later in its evolution, proto3 reintroduced the optional keyword as a way of marking presence, but in the internal representation this meant adding another presence marker (the source of the proto3_optional warning you were observing in the backend error).
It looks like you're using bits of the newer veneer, particularly adapt.NormalizeDescriptor(). I suspect that if you are, you may be on an older version of the module, as the normalization code was updated in this PR and released as part of bigquery/v1.33.0.
There's work underway to improve the experience with the Storage API and make it smoother overall, but there's still work to be done.
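For reference, a proto2 version of the Row message along those lines might look like the sketch below (field layout assumed from the question). With proto2, both fields generate as pointer types in Go, so a nil pointer serializes as absent and lands in BigQuery as NULL:

syntax = "proto2";

package person;
option go_package = "/personpb";

message Row {
  optional string name = 1; // left unset -> NULL in BigQuery
  optional int64 age = 2;
}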

Go gorm check a model is associated with another model

I have a race model
type Race struct {
    gorm.Model
    Title   string
    Date    string
    Token   string
    Heats   []Heat   `gorm:"constraint:OnUpdate:CASCADE,OnDelete:CASCADE;"`
    Runners []Runner `gorm:"many2many:race_runners;constraint:OnUpdate:CASCADE,OnDelete:SET NULL;"`
}
I want to add runners into heats, but before doing so I want to assert that the runner actually is in the race.
var race models.Race
if err := models.DB.Preload("Runners").Find(&race, "runners.id IN ?", []uint{runner.ID}).Error; err != nil {
    c.JSON(http.StatusNotFound, gin.H{"error": "Runner not in race!"})
    return
}
I'm getting the following error:
near "?": syntax error
Because I have a many-to-many relationship between Race and Runners, I doubt that I can just use runners.id in the condition I pass to my Find method. But I'm not sure how to actually do what I want to achieve, which is to ensure the runner is already in the race before adding it to a heat. Any suggestions?
The where condition wants to be in the Preload part. Then you can check len(fetchedRace.Runners) < 1, and if that is true, the runner is not in that race.
fetchedRace := Race{}
db.Preload("Runners", "runners.id = ?", runnerOne.ID).Find(&fetchedRace, race.ID)
Full working example
package main

import (
    "fmt"

    "gorm.io/driver/sqlite"
    "gorm.io/gorm"
)

type Race struct {
    gorm.Model
    Title   string
    Date    string
    Token   string
    Runners []Runner `gorm:"many2many:race_runners;constraint:OnUpdate:CASCADE,OnDelete:SET NULL;"`
}

type Runner struct {
    gorm.Model
    Name string
}

func main() {
    db, err := gorm.Open(sqlite.Open("many2many.db"), &gorm.Config{})
    if err != nil {
        panic("failed to connect database")
    }

    // Migrate the schema
    _ = db.AutoMigrate(&Race{}, &Runner{})

    raceOne := Race{
        Title: "Race One",
    }
    db.Create(&raceOne)

    runnerOne := Runner{
        Name: "Runner One",
    }
    runnerTwo := Runner{
        Name: "Runner Two",
    }
    db.Create(&runnerOne)
    db.Create(&runnerTwo)

    // Associate runners with race
    err = db.Model(&raceOne).Association("Runners").Append([]Runner{runnerOne, runnerTwo})

    // Fetch from DB
    fetchedRace := Race{}
    db.Debug().Preload("Runners", "runners.id = ?", runnerOne.ID).Find(&fetchedRace, raceOne.ID)
    if len(fetchedRace.Runners) < 1 {
        fmt.Println("error, runner not in race")
    }

    db.Delete(&raceOne)
    db.Delete(&runnerOne)
    db.Delete(&runnerTwo)
}
Using a custom SQL statement
This can be done in one SQL statement, and gorm isn't very optimal with its preloading, so if you are planning on using this often I would probably do something like this:
func isRunnerInRace(db *gorm.DB, runnerId uint, raceId uint) bool {
    count := 0
    db.Raw("SELECT COUNT(runner_id) FROM race_runners WHERE race_id = ? AND runner_id = ?",
        raceId, runnerId).Scan(&count)
    return count > 0
}
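Usage is then a plain boolean guard before adding the runner to a heat, mirroring the handler from the question:

if !isRunnerInRace(models.DB, runner.ID, race.ID) {
    c.JSON(http.StatusNotFound, gin.H{"error": "Runner not in race!"})
    return
}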

What is the proper way to save a slice of structs into Cloud Datastore (Firestore in Datastore Mode)?

I want to save a slice of structs in Google Cloud Datastore (Firestore in Datastore mode).
Take this Phonebook and Contact for example.
type Contact struct {
    Key   *datastore.Key `json:"id" datastore:"__key__"`
    Email string         `json:"email" datastore:",noindex"`
    Name  string         `json:"name" datastore:",noindex"`
}

type Phonebook struct {
    Contacts []Contact
    Title    string
}
Saving and loading this struct is no problem, as the Datastore library takes care of it. However, due to the presence of some complex properties in my actual code, I need to implement the PropertyLoadSaver methods.
Saving the Title property is straightforward, but I have problems storing the slice of Contact structs. I tried using the SaveStruct method:
func (pb *Phonebook) Save() ([]datastore.Property, error) {
    ps := []datastore.Property{
        {
            Name:    "Title",
            Value:   pb.Title,
            NoIndex: true,
        },
    }
    ctt, err := datastore.SaveStruct(pb.Contacts)
    if err != nil {
        return nil, err
    }
    ps = append(ps, datastore.Property{
        Name:    "Contacts",
        Value:   ctt,
        NoIndex: true,
    })
    return ps, nil
}
This code compiles but doesn't work.
The error message is datastore: invalid entity type
Making a slice of Property explicitly also does not work:
func (pb *Phonebook) Save() ([]datastore.Property, error) {
    ps := []datastore.Property{
        {
            Name:    "Title",
            Value:   pb.Title,
            NoIndex: true,
        },
    }
    cttProps := datastore.Property{
        Name:    "Contacts",
        NoIndex: true,
    }
    if len(pb.Contacts) > 0 {
        props := make([]interface{}, 0, len(pb.Contacts))
        for _, contact := range pb.Contacts {
            ctt, err := datastore.SaveStruct(contact)
            if err != nil {
                return nil, err
            }
            props = append(props, ctt)
        }
        cttProps.Value = props
    }
    ps = append(ps, cttProps)
    return ps, nil
}
Making a slice of Entity does not work either:
func (pb *Phonebook) Save() ([]datastore.Property, error) {
    ps := []datastore.Property{
        {
            Name:    "Title",
            Value:   pb.Title,
            NoIndex: true,
        },
    }
    cttProps := datastore.Property{
        Name:    "Contacts",
        NoIndex: true,
    }
    if len(pb.Contacts) > 0 {
        values := make([]datastore.Entity, len(pb.Contacts))
        props := make([]interface{}, 0, len(pb.Contacts))
        for _, contact := range pb.Contacts {
            ctt, err := datastore.SaveStruct(contact)
            if err != nil {
                return nil, err
            }
            values = append(values, datastore.Entity{
                Properties: ctt,
            })
        }
        for _, v := range values {
            props = append(props, v)
        }
        cttProps.Value = props
    }
    ps = append(ps, cttProps)
    return ps, nil
}
Both yielded the same error datastore: invalid entity type
Finally I resorted to using JSON. The slice of Contact is converted into a JSON array.
func (pb *Phonebook) Save() ([]datastore.Property, error) {
    ps := []datastore.Property{
        {
            Name:    "Title",
            Value:   pb.Title,
            NoIndex: true,
        },
    }
    var values []byte
    if len(pb.Contacts) > 0 {
        js, err := json.Marshal(pb.Contacts)
        if err != nil {
            return nil, err
        }
        values = js
    }
    ps = append(ps, datastore.Property{
        Name:    "Contacts",
        Value:   values,
        NoIndex: true,
    })
    return ps, nil
}
Isn't there a better way of doing this other than using JSON?
I found this document and it mentions src must be a struct pointer.
The only reason you seem to be customizing the saving of Phonebook is to avoid saving the Contacts slice when there are no contacts. If so, you can just define your Phonebook as follows and use SaveStruct directly on the Phonebook object:
type Phonebook struct {
    Contacts []Contact `datastore:"Contacts,noindex,omitempty"`
    Title    string    `datastore:"Title,noindex"`
}
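If the custom Save is genuinely needed for the complex properties mentioned, note two things the failing attempts above ran into: SaveStruct wants a struct pointer (as the linked document says), and nested entities are expressed as *datastore.Entity values, not datastore.Entity. A sketch under those assumptions:

func (pb *Phonebook) Save() ([]datastore.Property, error) {
    ps := []datastore.Property{
        {Name: "Title", Value: pb.Title, NoIndex: true},
    }
    if len(pb.Contacts) > 0 {
        props := make([]interface{}, 0, len(pb.Contacts))
        for i := range pb.Contacts {
            // Pass a pointer, since SaveStruct requires a struct pointer.
            cp, err := datastore.SaveStruct(&pb.Contacts[i])
            if err != nil {
                return nil, err
            }
            // Wrap each contact as a *datastore.Entity so the value is a
            // valid nested-entity type for the datastore package.
            props = append(props, &datastore.Entity{Properties: cp})
        }
        ps = append(ps, datastore.Property{Name: "Contacts", Value: props, NoIndex: true})
    }
    return ps, nil
}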

Insert for model with m2m in beego orm

I have two models:
type MainFields struct {
    Id      int       `orm:"auto"`
    Created time.Time `orm:"auto_now_add;type(datetime)"`
    Updated time.Time `orm:"auto_now;type(datetime)"`
}

type Game struct {
    MainFields
    Players []*Player `orm:"rel(m2m)"`
}

type Player struct {
    MainFields
    Games    []*Game `orm:"reverse(many)"`
    NickName string
}
And with this code I am trying to create a new game with one player:
func insertTestData() {
    var playerA models.Player
    playerA.NickName = "CoolDude"
    id, err := models.ORM.Insert(&playerA)
    if err != nil {
        log.Printf(err.Error())
    } else {
        log.Printf("Player ID: %v", id)
    }
    var game models.Game
    game.Players = []*models.Player{&playerA}
    id, err = models.ORM.Insert(&game)
    if err != nil {
        log.Printf(err.Error())
    } else {
        log.Printf("Game ID: %v", id)
    }
}
But it just creates two inserts, one for the game and one for the player, without the rel-connection through the "game_players" table that was created automatically by orm.RunSyncdb().
2016/09/29 22:19:59 Player ID: 1
[ORM]2016/09/29 22:19:59 -[Queries/default] - [ OK / db.QueryRow / 11.0ms] - [INSERT INTO "player" ("created", "updated", "nick_name") VALUES ($1, $2, $3) RETURNING "id"] - `2016-09-29 22:19:59.8615846 +1000 VLAT`, `2016-09-29 22:19:59.8615846 +1000 VLAT`, `CoolDude`
2016/09/29 22:19:59 Game ID: 1
[ORM]2016/09/29 22:19:59 -[Queries/default] - [ OK / db.QueryRow / 11.0ms] - [INSERT INTO "game" ("created", "updated") VALUES ($1, $2) RETURNING "id"] - `2016-09-29 22:19:59.8725853 +1000 VLAT`, `2016-09-29 22:19:59.8725853 +1000 VLAT`
I can't find any special rules for working with m2m models in the docs, so I'm asking the community for help. How should I insert the new relation row into the table?
According to this, you have to create an m2m object after creating the game object, like this:
m2m := models.ORM.QueryM2M(&game, "Players")
And instead of game.Players = []*models.Player{&playerA}, you write:
num, err := m2m.Add(playerA)
So, your function must look like this:
func insertTestData() {
    var playerA models.Player
    playerA.NickName = "CoolDude"
    id, err := models.ORM.Insert(&playerA)
    if err != nil {
        log.Printf(err.Error())
    } else {
        log.Printf("Player ID: %v", id)
    }
    var game models.Game
    id, err = models.ORM.Insert(&game)
    if err != nil {
        log.Printf(err.Error())
    } else {
        log.Printf("Game ID: %v", id)
    }
    m2m := models.ORM.QueryM2M(&game, "Players") // was o.QueryM2M; models.ORM matches the rest of the example
    num, err := m2m.Add(playerA)
    if err == nil {
        log.Printf("Added nums: %v", num)
    }
}
I hope this helps.
P.S.: BTW, you were right, it wasn't necessary to specify the name of the m2m table.
