Google Datastore Go client, storing dynamic data

Our application provides functionality which enables a customer to create dynamic forms and business rules. We recently decided to explore Google's infrastructure so we don't have to spend time tweaking and adjusting our own.
Thus far, we have managed well using a NoSQL database such as ArangoDB to store arbitrary data sets through its JSON HTTP REST API, which accepts any sort of data structure as long as it is valid JSON. However, the Google Datastore Go client library and Datastore itself don't work with raw JSON, and they also impose rules such as no slice ([]type) and no map (map[type]type) values, failing with errors such as datastore: invalid Value type, etc.
I explored the option of implementing the PropertyLoadSaver interface's Load/Save functions, with modifications that build Property values and a PropertyList to produce a []Property. In the case below, Collection is type Collection map[string]interface{}, which holds any sort of data set:
func (m Collection) Save() ([]datastore.Property, error) {
    data := []datastore.Property{}
    for key, value := range m {
        if util.IsSlice(value) {
            props := datastore.PropertyList{}
            for _, item := range value.([]string) {
                props = append(props, datastore.Property{Name: key, Value: item})
            }
            data = append(data, datastore.Property{Name: key, Value: props})
        } else {
            data = append(data, datastore.Property{Name: key, Value: value, NoIndex: true})
        }
    }
    json.NewEncoder(os.Stdout).Encode(data)
    return data, nil
}
Yes, we can create a struct, populate it from the map data, and save that to Datastore. We were, however, wondering whether there is an easier way to just receive a map and save it to Datastore with no added complexity.
Alternative
type Person struct {
    Name      string
    Surname   string
    Addresses []Address
    ...
}

type Address struct {
    Type   string
    Detail string
}
A map such as map[string]interface{}{"Name": "Kwasi", "Surname": "Gyasi-Agyei", "Addresses": ...} can then be decoded into the above struct to be saved by the Datastore Go client library.
I am, however, more interested in taking advantage of PropertyList and []Property, unless that route is unnecessarily complex. What I am basically asking is: which is the most appropriate route that offers the same kind of flexibility as a schemaless database?
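For reference, a minimal sketch of the PropertyLoadSaver route (assuming the cloud.google.com/go/datastore package; JSON-encoding composite values is purely an illustrative convention to sidestep the value-type restrictions, not something the library prescribes):

package model

import (
    "encoding/json"
    "reflect"

    "cloud.google.com/go/datastore"
)

// Collection is a schemaless record.
type Collection map[string]interface{}

// Save flattens the map; slices and maps are stored as unindexed JSON strings.
func (c Collection) Save() ([]datastore.Property, error) {
    props := make([]datastore.Property, 0, len(c))
    for key, value := range c {
        switch reflect.ValueOf(value).Kind() {
        case reflect.Slice, reflect.Map:
            blob, err := json.Marshal(value)
            if err != nil {
                return nil, err
            }
            props = append(props, datastore.Property{Name: key, Value: string(blob), NoIndex: true})
        default:
            props = append(props, datastore.Property{Name: key, Value: value, NoIndex: true})
        }
    }
    return props, nil
}

// Load copies each stored property back into the map as-is. JSON-encoded
// composites come back as strings, so a real implementation would need a
// convention (e.g. a name prefix) for knowing which properties to unmarshal.
func (c Collection) Load(props []datastore.Property) error {
    for _, p := range props {
        c[p.Name] = p.Value
    }
    return nil
}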

Related

How can I separate generated code package and user code but have them accessible from one place in code

I am fairly new to Go, so I bought some courses from Udemy to help break into the language. One of them I found very helpful for getting a general understanding as I took on a project in the language.
In the class that I took, all of the SQL-related functions were in the sqlc folder, with the structure less broken out:

sqlc
    generatedcode
    store
One of those files is a querier, generated by sqlc, that contains an interface with all of the generated methods. Here is the general idea of what it currently looks like: https://github.com/techschool/simplebank/tree/master/db/sqlc
package db

import (
    "context"

    "github.com/google/uuid"
)

type Querier interface {
    AddAccountBalance(ctx context.Context, arg AddAccountBalanceParams) (Account, error)
    CreateAccount(ctx context.Context, arg CreateAccountParams) (Account, error)
    ...
}

var _ Querier = (*Queries)(nil)
Would it be possible to wrap both what sqlc generates AND any queries that a developer creates (dynamic queries) into a single querier? I'm also trying to keep the sqlc-generated code in its own folder. The structure I am aiming for is:
sql
    sqlc
        generatedcode
    store (wraps it all together)
    dynamicsqlfiles
This should clear up what I mean by store: https://github.com/techschool/simplebank/blob/master/db/sqlc/store.go
package db

import (
    "context"
    "database/sql"
    "fmt"
)

// Store defines all functions to execute db queries and transactions
type Store interface {
    Querier
    TransferTx(ctx context.Context, arg TransferTxParams) (TransferTxResult, error)
}

// SQLStore provides all functions to execute SQL queries and transactions
type SQLStore struct {
    db *sql.DB
    *Queries
}

// NewStore creates a new store
func NewStore(db *sql.DB) Store {
    return &SQLStore{
        db:      db,
        Queries: New(db),
    }
}
I'm trying to run everything through that store (both generated and my own functions), so I can make a call similar to the server.store.CreateUser call in this file: https://github.com/techschool/simplebank/blob/master/api/user.go
arg := db.CreateUserParams{
    Username:       req.Username,
    HashedPassword: hashedPassword,
    FullName:       req.FullName,
    Email:          req.Email,
}

user, err := server.store.CreateUser(ctx, arg)
if err != nil {
    if pqErr, ok := err.(*pq.Error); ok {
        switch pqErr.Code.Name() {
        case "unique_violation":
            ctx.JSON(http.StatusForbidden, errorResponse(err))
            return
        }
    }
    ctx.JSON(http.StatusInternalServerError, errorResponse(err))
    return
}
I've tried creating something that houses another querier interface that embeds the generated one, then creating my own db.go that uses the generated DBTX interface but has its own Queries struct and New function. It always gives me an error saying that the Queries struct I created doesn't implement the functions I declared, despite having implemented them in the custom code I wrote.
I deleted that branch and have been clicking through the simplebank project linked above to see if I can find another way this could be done, or whether I missed something. If it can't be done, that's okay. I'm just using this as a good opportunity to learn a little more about the language and keep some code separated if possible.
UPDATE:
There were only a few pieces I had to change, but I modified the store.go to look more like:
// sdb is imported, but points to the generated Querier

// Store provides all functions to execute db queries and transactions
type Store interface {
    sdb.Querier
    DynamicQuerier
}

// SQLStore provides all functions to execute SQL queries and transactions
type SQLStore struct {
    db *sql.DB
    *sdb.Queries
    *dynamicQueries
}

// NewStore creates a new Store
func NewStore(db *sql.DB) Store {
    return &SQLStore{
        db:             db,
        Queries:        sdb.New(db),
        dynamicQueries: New(db),
    }
}
Then I just created a new querier interface and struct for the methods I would be creating, gave them their own New function, and tied it all together in the above. Before, I was trying to figure out a way to reuse as much of the generated code as possible, which I think was the issue.
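For context, a minimal sketch of what that dynamic side might look like (the query name here is hypothetical, and sdb is the import alias for the generated package):

package db

import (
    "context"
    "database/sql"
)

// DynamicQuerier lists the hand-written queries, mirroring the generated sdb.Querier.
type DynamicQuerier interface {
    GetAccountSummary(ctx context.Context, id int64) (string, error)
}

type dynamicQueries struct {
    db *sql.DB
}

// New constructs the hand-written query set, mirroring the generated sdb.New.
func New(db *sql.DB) *dynamicQueries {
    return &dynamicQueries{db: db}
}

// Compile-time check, in the same style as the generated code.
var _ DynamicQuerier = (*dynamicQueries)(nil)

// GetAccountSummary is a stand-in for any hand-written query.
func (q *dynamicQueries) GetAccountSummary(ctx context.Context, id int64) (string, error) {
    var owner string
    err := q.db.QueryRowContext(ctx, "SELECT owner FROM accounts WHERE id = $1", id).Scan(&owner)
    return owner, err
}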
Why I wanted the Interface:
I wanted a structure that more clearly separated the files I would be working in from the generated files that I would never touch.
I like how the generated code put everything in the Querier interface, then checked that anything implementing it satisfied all of the function requirements. So I wanted to replicate that for the dynamic portion which I would be creating on my own.
It might be complicating things a bit more than they 'NEED' to be, but it also provides an additional layer of error checking that is nice to have. And in this case, even if maybe not necessary, it ended up being doable.
Would it be possible to wrap both what sqlc generates AND any queries that a developer creates (dynamic queries) into a single querier?
If I'm understanding your question correctly, I think you are looking for something like the below (playground):
package main

import (
    "context"
    "database/sql"
)

// Sample sqlc-generated code

type DBTX interface {
    ExecContext(context.Context, string, ...interface{}) (sql.Result, error)
    PrepareContext(context.Context, string) (*sql.Stmt, error)
    QueryContext(context.Context, string, ...interface{}) (*sql.Rows, error)
    QueryRowContext(context.Context, string, ...interface{}) *sql.Row
}

type Queries struct {
    db DBTX
}

func (q *Queries) DeleteAccount(ctx context.Context, id int64) error {
    // _, err := q.db.ExecContext(ctx, deleteAccount, id)
    // return err
    return nil // Pretend that this always works
}

type Querier interface {
    DeleteAccount(ctx context.Context, id int64) error
}

//
// Your custom "dynamic" queries
//

type myDynamicQueries struct {
    db DBTX
}

func (m *myDynamicQueries) GetDynamicResult(ctx context.Context) error {
    // _, err := m.db.ExecContext(ctx, getDynamicResult)
    // return err
    return nil // Pretend that this always works
}

type myDynamicQuerier interface {
    GetDynamicResult(ctx context.Context) error
}

// Combine things
type allDatabase struct {
    *Queries // Note: you could embed this directly into myDynamicQueries instead of having a separate struct if that is your preference
    *myDynamicQueries
}

type DatabaseFunctions interface {
    Querier
    myDynamicQuerier
}

func main() {
    // Basic example
    var db DatabaseFunctions
    db = getDatabase()
    db.DeleteAccount(context.Background(), 0)
    db.GetDynamicResult(context.Background())
}

// getDatabase - perform whatever is needed to connect to the database...
func getDatabase() allDatabase {
    sqlc := &Queries{db: nil}           // In reality you would use New() to do this!
    myDyn := &myDynamicQueries{db: nil} // Again it's often cleaner to use a function
    return allDatabase{Queries: sqlc, myDynamicQueries: myDyn}
}
The above is all in one file for simplicity, but it could easily pull from multiple packages, e.g.:
type allDatabase struct {
    *generatedcode.Queries
    *store.DynamicQueries // note: the type must be exported to be embedded from another package
}
If this does not answer your question then please show one of your failed attempts (so we can see where you are going wrong).
One general comment - do you really need the interface? A common recommendation is "Accept interfaces, return structs". While this may not always apply I suspect you may be introducing interfaces where they are not really necessary and this may add unnecessary complexity.
I thought that the Store, which was housing both Queriers, was tying it all together. Can you explain a little with the example above (in the question post) why it's not necessary? How does SQLStore get access to all of the Querier interface functions?
The struct SQLStore is what is "tying it all together". As per the Go spec:
Given a struct type S and a named type T, promoted methods are included in the method set of the struct as follows:
If S contains an embedded field T, the method sets of S and *S both include promoted methods with receiver T. The method set of *S also includes promoted methods with receiver *T.
If S contains an embedded field *T, the method sets of S and *S both include promoted methods with receiver T or *T.
So an object of type SQLStore:
type SQLStore struct {
    db *sql.DB
    *sdb.Queries
    *dynamicQueries
}

var foo SQLStore // Assume that we are actually providing values for all fields

will implement all of the methods of sdb.Queries and also those in dynamicQueries (you can also access the sql.DB members via foo.db.XXX). This means that you can call foo.AddAccountBalance() and foo.MyGenericQuery() (assuming that is in dynamicQueries!) etc.
The spec says "In its most basic form an interface specifies a (possibly empty) list of methods". So you can think of an interface as a list of functions that must be implemented by whatever implementation (e.g. struct) you assign to the interface (the interface itself does not implement anything directly).
A minimal sketch (with hypothetical types, purely to illustrate implicit interface satisfaction) might help you understand:
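package main

import "fmt"

// Speaker is just a list of methods; any type that has all of them satisfies it.
type Speaker interface {
    Speak() string
}

// Dog satisfies Speaker implicitly; no "implements" declaration is needed.
type Dog struct{}

func (Dog) Speak() string { return "woof" }

func main() {
    var s Speaker = Dog{}  // the interface value holds a Dog
    fmt.Println(s.Speak()) // the call is dispatched to Dog's method
}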
Hopefully that helps a little (as I'm not sure which aspect you don't understand, I'm not really sure what to focus on).

Go - Same interface to handle multiple types

I am dealing with multiple vendor APIs which allow creating Device records, but, as expected, they represent devices differently. A basic example (focusing on the difference in ID types among vendors):
Vendor1#Device uses integer IDs: { ID: <int>, ...vendor1-specific details }
Vendor2#Device uses UUIDs: { UUID: <string>, ...vendor2-specific details }
Since the structures vary among vendors, I am planning to save these device records in a MongoDB collection, so I have created the following interface to use from application code:
type Device struct {
    Checksum string
    RemoteID ?? // vendor1 uses int and vendor2 uses uuid
}

type DataAccessor interface {
    FindDeviceByChecksum(string) (Device, error)
    InsertDevice(Device) (bool, error)
}
This will be used from an orchestration/service object, like:
type Adapter interface {
    AssignGroupToDevice(GroupID, DeviceRemoteID ??) (bool, error)
}

type Orchestrator struct {
    da        DataAccessor
    vendorAPI Adapter
}

// Inside orchestrator#Assign method
device, _ := o.da.FindDeviceByChecksum("checksum from arg")
.
.
o.vendorAPI.AssignGroupToDevice("groupID from arg", device.RemoteID ??)
// The above method calls out to the vendor's HTTP API and passes the JSON payload built from the args
As you can see, I can't put a type on RemoteID or DeviceRemoteID. What are my options for handling this pragmatically? An empty interface would have me writing type switches in the interface implementation. Generics? Or something else? I am confused.
Your application code should not care at all about the actual vendors and their APIs.
Try to define some core entity package that you will use in your domain, in your application. This can be anything you decide and shouldn't depend on external vendors.
The service will define the interface it needs in order to do the appropriate business logic (find, insert, assign group ID).
For example, the entity package can be device:

package device

type Device struct {
    Checksum string
    RemoteID string
}
Note that you can define the RemoteID as a string. For each vendor, you will have an adapter that has knowledge of both the application entities and the external vendor API. Each adapter will need to implement the interface the service requires:
type DeviceRepository interface {
    FindDeviceByChecksum(string) (device.Device, error)
    InsertDevice(device.Device) (bool, error)
}

type VendorAdapter interface {
    AssignGroupToDevice(GroupID, DeviceRemoteID string) (bool, error)
}

type Orchestrator struct {
    deviceRepo    DeviceRepository
    vendorAdapter VendorAdapter
}
// Inside orchestrator#Assign method
device, err := o.deviceRepo.FindDeviceByChecksum("checksum from arg")
if err != nil {...}
.
.
o.vendorAdapter.AssignGroupToDevice("groupID from arg", device.RemoteID)
You can note a few things here:
Defined the interfaces in the service package. (Define interfaces where you are using/requiring them.)
DeviceRepository: this is the data layer, responsible for persisting your entity (into Mongo). (Repo is just a convention I'm used to; it doesn't have to be called repo :)
VendorAdapter: an adapter to the actual vendor. The service has no knowledge of the implementation of this vendor adapter. It doesn't care what the adapter does with the remote-id. The vendor API that uses int will just convert the string to an int.
Of course naming is up to you. You can use DeviceProvider instead of VendorAdapter, for example - anything that will make sense to you and your team.
This is the whole point of the adapters: they convert between the entity and the external representation, from the application's language into the specific external language and vice versa. In a way, a repository is just an adapter to the database.
Edit: For example, the vendor adapter with the int remote-id will just convert the string to an int (unless I'm totally missing the point lol :)

package vendor2

type adapter struct {
    ...
}

func (a *adapter) AssignGroupToDevice(groupID, deviceRemoteID string) (bool, error) {
    vendor2RemoteID, err := strconv.Atoi(deviceRemoteID)
    if err != nil {...}
    // send to vendor2's API, using vendor2RemoteID, which is an int
    ...
}
Edit2:
If there is another field that differs between these vendors and is not primitive, you can still use a string; the vendor's adapter will just need to marshal the string into the specific vendor's custom payload.
Another way is of course to do as #blackgreen said and save it as interface{}.
You need to make a few decisions:
How you will serialize it to the database
How you will serialize it in the vendor's adapters
Whether the application uses it - that is, does the application have knowledge of the value, or is it agnostic to it? (If it does, you probably won't want to save it as a string. Maybe :)
The repo - will save it as JSON to the DB
The vendor adapter - will convert this interface{} to the actual payload the API needs
But there are many, many other options for dealing with dynamic types, so sorry I didn't quite answer your question. The core of the solution depends on whether the application uses the value, or whether it is only data for the vendor adapter to use.
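A minimal sketch of that interface{} variant (hypothetical names; the bson tags assume the official go.mongodb.org/mongo-driver, which stores an interface{} field as whatever BSON type the value maps to, and the adapter interface would take interface{} instead of string):

package device

// Device keeps the vendor ID opaque; each vendor adapter asserts the concrete type it expects.
type Device struct {
    Checksum string      `bson:"checksum"`
    RemoteID interface{} `bson:"remoteId"` // int for vendor1, string UUID for vendor2
}

A vendor1 adapter would then recover the concrete type with an assertion:

package vendor1

import "fmt"

type adapter struct{}

// AssignGroupToDevice expects vendor1's integer IDs and rejects anything else.
func (a *adapter) AssignGroupToDevice(groupID string, remoteID interface{}) (bool, error) {
    id, ok := remoteID.(int)
    if !ok {
        return false, fmt.Errorf("vendor1 expects an int remote ID, got %T", remoteID)
    }
    // call vendor1's HTTP API with id...
    _ = id
    return true, nil
}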

Passing default/static values from server to client

I have an input type with two fields used for filtering a query on the client.
I want to pass the default values (rentIntervalLow + rentIntervalHigh) from server to the client, but don't know how to do it.
Below is my current code. I've come up with two naïve solutions:
Letting the client introspect the whole schema.
Have a global config object, and create a querable Config type with a resolver that returns the config object values.
Any better suggestions than the above for how to make default/config values on the server accessible to the client?
// schema.js
const typeDefs = gql`
  input FilteringOptions {
    rentIntervalLow: Int = 4000
    rentIntervalHigh: Int = 10000
  }

  type Home {
    id: Int
    roomCount: Int
    rent: Int
  }

  type Query {
    allHomes(first: Int, cursor: Int, input: FilteringOptions): [Home]
  }
`

export default typeDefs
I'm using Apollo Server 2.8.1 and Apollo React 3.0.
It's unnecessary to introspect the whole schema to get information about a particular type. You can just write a query like:
query {
  __type(name: "FilteringOptions") {
    inputFields {
      name
      description
      defaultValue
    }
  }
}
Default values are values that will be used when a particular input value is omitted from the query. So to utilize the defaults, the client would pass an empty object to the input argument of the allHomes field. You could also give input itself a default value of {} (i.e. allHomes(first: Int, cursor: Int, input: FilteringOptions = {})), which would allow the client not to provide the input argument at all while still relaying the min and max default values to the resolver.
If, however, your intent is to provide the minimum and maximum values to your client in order to drive some client-specific logic (like validation, drop down menu values, etc.), then you should not utilize default values for this. Instead, this information should be queried directly by the client, using, for example, a Config type like you suggested.

How to ensure uniqueness of a property in a NoSQL record ( Golang + tiedot )

I'm working on a simple application written in Go, using tiedot as the NoSQL database engine.
I need to store some users in the database.
type User struct {
    Login        string
    PasswordHash string
    Salt         string
}
Of course two users cannot have the same login, and - as this engine does not provide any transaction mechanism - I'm wondering how to ensure that there is no duplicate login in the database when writing.
I first thought that I could just search for the user by login before inserting, but as the database will be used concurrently, it is not reliable.
Maybe I could wait for a random time and, if there is another user with the same login in the collection, delete it, but that does not sound reliable either.
Is this even possible, or should I switch to a database engine that supports transactions?
Below is my solution. It is not tiedot-specific, but it uses CQRS and can be applied to various DBs.
It also gives you other benefits, such as caching and bulk writes (where the DB supports them), so you don't have to ask the DB on every request.
package main

import (
    "errors"
    "log"
    "sync"
)

type User struct {
    Login        string
    PasswordHash string
    Salt         string
}

// MutexedUser is an in-memory, lock-protected index of users keyed by login.
type MutexedUser struct {
    sync.RWMutex
    Map map[string]User
}

var u = &MutexedUser{}

func main() {
    var user User
    u.Sync()

    // Get new user here
    // ...
    if err := u.Insert(user); err != nil {
        // Ask to provide a new login
        // ...
        log.Println(err)
    }
}

// Insert adds the user while holding the write lock, rejecting duplicate logins.
func (u *MutexedUser) Insert(user User) (err error) {
    u.Lock()
    if _, ok := u.Map[user.Login]; !ok {
        u.Map[user.Login] = user
        // Add user to DB
        // ...
        u.Unlock()
        return err
    }
    u.Unlock()
    return errors.New("duplicated login")
}

func (u *MutexedUser) Read(login string) User {
    u.RLock()
    value := u.Map[login]
    u.RUnlock()
    return value
}

// Sync (re)loads all users from the DB into the in-memory map.
func (u *MutexedUser) Sync() (err error) {
    var users []User
    u.Lock()
    defer u.Unlock()

    // Read users from DB
    // ...
    u.Map = make(map[string]User)
    for _, user := range users {
        u.Map[user.Login] = user
    }
    return err
}
I first thought that I could just search for the user by login before inserting, but as the database will be used concurrently, it is not reliable.
Right, it creates a race condition. The only way to resolve this is:
Lock the table
Search for the login
Insert if the login is not found
Unlock the table
Table-locks are not a scalable solution, because it creates an expensive bottleneck in your application. It's why non-transactional storage engines like MySQL's MyISAM are being phased out. It's why MongoDB has to use clusters to scale up.
It can work if you have a small dataset size and a light amount of concurrency, so perhaps it's adequate for login creation on a lightly-used website. New logins probably aren't created so frequently that they need to scale up so much.
But users logging in, or password changes, or other changes to account attributes, do happen more frequently.
The solution for this is to make this operation atomic, to avoid race conditions. For example, attempt the insert and have the database engine verify uniqueness and reject the insert if it violates that constraint.
Unfortunately, I don't see anything in tiedot's documentation showing that it supports a unique constraint or uniqueness enforcement on indexes.
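For comparison, a minimal sketch of that atomic approach on an engine that does enforce uniqueness (a hypothetical users table with a unique login column, using database/sql and the github.com/lib/pq driver):

package userstore

import (
    "database/sql"
    "fmt"

    "github.com/lib/pq"
)

// insertUser leans on the database's unique constraint instead of
// search-then-insert, so concurrent inserts cannot race.
func insertUser(db *sql.DB, login, passwordHash, salt string) error {
    _, err := db.Exec(
        "INSERT INTO users (login, password_hash, salt) VALUES ($1, $2, $3)",
        login, passwordHash, salt,
    )
    if pqErr, ok := err.(*pq.Error); ok && pqErr.Code.Name() == "unique_violation" {
        return fmt.Errorf("login %q is already taken", login)
    }
    return err
}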
Tiedot is 98% written by a single developer, over a period of about two years (May 2013 - April 2015), with very little activity since then (see https://www.openhub.net/p/tiedot). I would consider tiedot an experimental project, unlikely to expand its feature set.

Golang with couchbase integration issue

I'm using Go with a Couchbase integration component called go-couchbase. It is able to connect to Couchbase and retrieve data. However, I have a problem sending a start key, a skip value, and a limit value with this API, because I could not find any functionality for this myself.
URL: github.com/couchbaselabs/go-couchbase
Please let me know of any method to send these values to Couchbase and retrieve data.
That start key is only mentioned once, as a parameter to a Couchbase view:
// View executes a view.
//
// The ddoc parameter is just the bare name of your design doc without
// the "_design/" prefix.
//
// Parameters are string keys with values that correspond to couchbase
// view parameters. Primitives should work fairly naturally (booleans,
// ints, strings, etc...) and other values will attempt to be JSON
// marshaled (useful for array indexing on view keys, for example).
//
// Example:
//
//    res, err := couchbase.View("myddoc", "myview", map[string]interface{}{
//        "group_level": 2,
//        "start_key":   []interface{}{"thing"},
//        "end_key":     []interface{}{"thing", map[string]string{}},
//        "stale":       false,
//    })
func (b *Bucket) View(ddoc, name string, params map[string]interface{}) (ViewResult, error) {
I suppose the skip one (mentioned in "Pagination with Couchbase") is just another parameter to add to the params map[string]interface{}.
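Under that assumption, a minimal sketch (hypothetical design doc and view names; the row fields follow go-couchbase's ViewResult/ViewRow types, so double-check against the source):

// Hypothetical usage, assuming bucket is a connected *couchbase.Bucket
// (import couchbase "github.com/couchbaselabs/go-couchbase"):
res, err := bucket.View("myddoc", "myview", map[string]interface{}{
    "start_key": []interface{}{"thing"},
    "skip":      20, // large offsets get expensive; start_key-based paging scales better
    "limit":     10,
    "stale":     false,
})
if err != nil {
    log.Fatal(err)
}
for _, row := range res.Rows {
    fmt.Println(row.ID)
}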
