Should database connections be opened and closed in every CRUD method? - go

I am using the GORM ORM for Postgres access in a Go application.
I have four functions, Create, Update, Delete, and Read, in a database repository.
In each of these functions I open a database connection, perform the CRUD operation, and then close the connection immediately after the operation completes, as in the snippet below (using GORM):
func (e *Example) Create(m *model.Example) (*model.Example, error) {
    // open a database session
    dbSession, err := e.OpenDB() // gorm.Open("postgres", connStr)
    if err != nil {
        log.Log(err)
        return nil, err
    }
    // close database connection after operation is completed
    defer dbSession.Close()
    // create item
    db := dbSession.Create(m)
    if db.Error != nil {
        return nil, db.Error
    }
    return m, nil
}
Is that the correct practice, or should a single database connection be shared across the whole application, with the ORM left to manage connections?

You should reuse a DB connection as much as you can. GORM also has a built-in connection pool, so you don't need to manage the db handle yourself. Simply share it among all goroutines; they can use the handle safely, and new connections are allocated as needed.
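For illustration, here is a minimal sketch of the shared-handle approach using GORM v1's API (matching the gorm.Open("postgres", connStr) call in the question); the package layout and the model import path are assumptions, not part of the original code:

    package repo

    import (
        "github.com/jinzhu/gorm"
        _ "github.com/jinzhu/gorm/dialects/postgres"

        "yourapp/model" // hypothetical path to the question's model package
    )

    type Example struct {
        db *gorm.DB // one shared handle; GORM pools connections internally
    }

    // NewExample opens the database once, at application startup.
    func NewExample(connStr string) (*Example, error) {
        db, err := gorm.Open("postgres", connStr)
        if err != nil {
            return nil, err
        }
        return &Example{db: db}, nil
    }

    // Create reuses the shared handle instead of dialing per call.
    func (e *Example) Create(m *model.Example) (*model.Example, error) {
        if err := e.db.Create(m).Error; err != nil {
            return nil, err
        }
        return m, nil
    }

The repository is constructed once in main and its methods reuse e.db, so GORM's internal pool handles the concurrent requests.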

Related

golang pgconnection pool - how to test if the pool was initialized or empty

I am not fully familiar with Go patterns / best practices for database connections. I am trying to implement a simple web service in Go, using a jackc/pgx connection pool.
In a file (let us call it users.go, a web service with user-registration handlers), I keep a connection-pool object at package level. Every time a request lands on the app, I want to check whether the connection pool has been initialized: if yes, return it; otherwise initialize it and return it.
I am trying to write code like this:
var pool *pgxpool.Pool

// the request handler fn
func registerUser(c *gin.Context) {
    // get connection pool and insert into db.
    pool, err := getConnPool()
    ...
}

// the getConnPool function
func getConnPool() (*pgxpool.Pool, error) {
    // How to test if the pool was not initialized???
    if pool == nil {
        // initialize
        var err error
        pool, err = pgxpool.Connect(context.Background(), os.Getenv("DATABASE_URL"))
        return pool, err
    }
    return pool, nil
}
How do I test whether the pool variable is connected / not nil, etc.?
I am not sure this is the correct approach in Go, though I have used this kind of lazy initialization frequently in NodeJS. Should I initialize the pool in the main file, or is there an alternative like the above?
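A common way to avoid racing on the nil check (a sketch, not from the original post) is to either initialize the pool once in main and pass it to the handlers, or guard the lazy initialization with sync.Once; the import path assumes pgx v4, and DATABASE_URL is taken from the question's code:

    package main

    import (
        "context"
        "os"
        "sync"

        "github.com/jackc/pgx/v4/pgxpool"
    )

    var (
        pool     *pgxpool.Pool
        poolOnce sync.Once
        poolErr  error
    )

    // getConnPool initializes the pool exactly once, even under concurrent requests.
    func getConnPool() (*pgxpool.Pool, error) {
        poolOnce.Do(func() {
            pool, poolErr = pgxpool.Connect(context.Background(), os.Getenv("DATABASE_URL"))
        })
        return pool, poolErr
    }

With sync.Once there is no need to test whether the pool "was initialized": the first caller initializes it, and every later caller gets the same *pgxpool.Pool (or the original connection error).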

GORM Automigrate on two databases/Sharded databases

I am trying to implement a very simple shard using GORM and DBResolver.
Here is the code for it:
func (mysqlDB *MySQLDB) InitDB() {
    dsn_master := "root:rootroot@tcp(127.0.0.1:3306)/workspace_master?charset=utf8mb4&parseTime=True&loc=Local"
    dsn_shard1 := "root:rootroot@tcp(127.0.0.1:3306)/workspace_shard1?charset=utf8mb4&parseTime=True&loc=Local"
    db, err := gorm.Open(mysql.Open(dsn_master), &gorm.Config{
        Logger: logger.Default.LogMode(logger.Info)})
    if err != nil {
        log.Println("Connection Failed to Open")
    } else {
        log.Println("Connection Established")
    }
    db.Use(dbresolver.Register(dbresolver.Config{
        Sources: []gorm.Dialector{mysql.Open(dsn_shard1)}},
        &models.WorkspaceGroup{}, "shard1"))
    db.AutoMigrate(&models.Workspace{}, &models.WorkspaceMember{})
    //db.AutoMigrate(&models.Workspace{}, &models.WorkspaceMember{}, &models.WorkspaceGroup{}, &models.GroupMember{})
    db.Clauses(dbresolver.Use("shard1")).AutoMigrate(&models.WorkspaceGroup{}, &models.GroupMember{})
    mysqlDB.Database = db
}
I am creating two databases, workspace_master and workspace_shard1.
The issue is that auto-migration is not working as expected: the shard tables are not created in their respective database. I have also tried the commented-out code (running AutoMigrate on all the tables, with the resolver registered beforehand).
Expected Result:
Workspace and WorkspaceMember will get created in workspace_master database
WorkspaceGroup and GroupMember will get created in workspace_shard1 database
Current Result:
All tables are created in the workspace_master database.
However, if I create WorkspaceGroup and GroupMember manually in workspace_shard1, any subsequent create, select, delete, etc. queries go correctly to workspace_shard1, so DBResolver itself seems to be working as expected.
The only issue is that db.AutoMigrate is not behaving as expected. Can anyone suggest how this can be achieved?
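One workaround (a sketch only, not taken from the original post and not a DBResolver feature) is to run the shard-only migrations on a handle opened directly against the shard DSN, and keep DBResolver purely for runtime routing; migrateShard is a hypothetical helper, while dsn_shard1 and the models package come from the question's code:

    // migrateShard is a hypothetical helper: it migrates the shard-only tables
    // on a connection opened straight against the shard database.
    func migrateShard(dsnShard1 string) error {
        shardDB, err := gorm.Open(mysql.Open(dsnShard1), &gorm.Config{})
        if err != nil {
            return err
        }
        return shardDB.AutoMigrate(&models.WorkspaceGroup{}, &models.GroupMember{})
    }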

How to ensure uniqueness of a property in a NoSQL record (Golang + tiedot)

I'm working on a simple application written in golang, using tiedot as NoSQL database engine.
I need to store some users in the database.
type User struct {
    Login        string
    PasswordHash string
    Salt         string
}
Of course two users cannot have the same login, and, as this engine does not provide any transaction mechanism, I'm wondering how to ensure there is no duplicate login in the database when writing.
I first thought that I could just search for a user by login before inserting, but as the database will be used concurrently, that is not reliable.
Maybe I could wait for a random time and, if there is another user with the same login in the collection, delete it, but that does not sound reliable either.
Is this even possible, or should I switch to a database engine that supports transactions?
Below is my solution. It is not tiedot-specific; it uses CQRS and can be applied to various DBs.
It also brings other benefits, such as caching and bulk writes (where the DB supports them), so you don't have to ask the DB on every request.
package main

import (
    "errors"
    "log"
    "sync"
)

type User struct {
    Login        string
    PasswordHash string
    Salt         string
}

type MutexedUser struct {
    sync.RWMutex
    Map map[string]User
}

var u = &MutexedUser{}

func main() {
    var user User
    u.Sync()

    // Get new user here
    //...

    if err := u.Insert(user); err != nil {
        // Ask to provide new login
        //...
        log.Println(err)
    }
}

func (u *MutexedUser) Insert(user User) (err error) {
    u.Lock()
    if _, ok := u.Map[user.Login]; !ok {
        u.Map[user.Login] = user
        // Add user to DB
        //...
        u.Unlock()
        return err
    }
    u.Unlock()
    return errors.New("duplicated login")
}

func (u *MutexedUser) Read(login string) User {
    u.RLock()
    value := u.Map[login]
    u.RUnlock()
    return value
}

func (u *MutexedUser) Sync() (err error) {
    var users []User
    u.Lock()
    defer u.Unlock()

    // Read users from DB
    //...

    u.Map = make(map[string]User)
    for _, user := range users {
        u.Map[user.Login] = user
    }
    return err
}
I first thought that I could just search for a user by login before inserting, but as the database will be used concurrently, it is not reliable.
Right, it creates a race condition. The only way to resolve this is:
Lock the table
Search for the login
Insert if the login is not found
Unlock the table
Table locks are not a scalable solution, because they create an expensive bottleneck in your application. It's why non-transactional storage engines like MySQL's MyISAM are being phased out. It's why MongoDB has to use clusters to scale up.
It can work if you have a small dataset size and a light amount of concurrency, so perhaps it's adequate for login creation on a lightly-used website. New logins probably aren't created so frequently that they need to scale up so much.
But users logging in, or password changes, or other changes to account attributes, do happen more frequently.
The solution for this is to make this operation atomic, to avoid race conditions. For example, attempt the insert and have the database engine verify uniqueness and reject the insert if it violates that constraint.
Unfortunately, I don't see any documentation in tiedot that shows that it supports a unique constraint or a uniqueness enforcement on indexes.
Tiedot is 98% written by a single developer, in a period of about 2 years (May 2013 - April 2015). Very little activity since then (see https://www.openhub.net/p/tiedot). I would consider tiedot to be an experimental project, unlikely to expand in feature set.
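For contrast, here is what the atomic-insert approach described above looks like against an engine that does enforce uniqueness (a sketch using database/sql with Postgres; the users table, its schema, and the lib/pq driver are assumptions for illustration, not anything tiedot provides):

    // Illustrative only. Assumes Postgres, the github.com/lib/pq driver, and:
    //   CREATE TABLE users (login TEXT PRIMARY KEY, password_hash TEXT, salt TEXT)
    package main

    import (
        "database/sql"

        _ "github.com/lib/pq"
    )

    type User struct {
        Login        string
        PasswordHash string
        Salt         string
    }

    func insertUser(db *sql.DB, u User) error {
        // The engine checks the primary key atomically with the insert, so two
        // concurrent registrations of the same login cannot both succeed; the
        // loser gets a unique-violation error, and no prior SELECT is needed.
        _, err := db.Exec(
            `INSERT INTO users (login, password_hash, salt) VALUES ($1, $2, $3)`,
            u.Login, u.PasswordHash, u.Salt,
        )
        return err
    }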

Datastore: Create parent and child entity in an entity group transaction?

After reading about Google Datastore concepts/theory, I started using the Go datastore package.
Scenario:
Kinds User and LinkedAccount require that every user has one or more linked accounts (yay 3rd party login). For strong consistency, LinkedAccounts will be children of the associated User. New User creation then involves creating both a User and a LinkedAccount, never just one.
User creation seems like the perfect use case for transactions. If, say, LinkedAccount creation fails, the transaction rolls back and fails. This doesn't currently seem possible. The goal is to create a parent and then a child within a transaction.
According to the docs:
All Datastore operations in a transaction must operate on entities in
the same entity group if the transaction is a single group transaction
We want a new User and LinkedAccount to be in the same group, so to me it sounds like Datastore should support this scenario. My fear is that the intended meaning is that operations on existing entities in the same group can be performed in a single transaction.
tx, err := datastore.NewTransaction(ctx)
if err != nil {
    return err
}
incompleteUserKey := datastore.NewIncompleteKey(ctx, "User", nil)
pendingKey, err := tx.Put(incompleteUserKey, user)
if err != nil {
    return err
}
_ = pendingKey // kept to show the type; it can't be used as a parent
incompleteLinkedAccountKey := datastore.NewIncompleteKey(ctx, "GithubAccount", incompleteUserKey)
// also tried PendingKey as parent, but it's a separate struct type
_, err = tx.Put(incompleteLinkedAccountKey, linkedAccount)
if err != nil {
    return err
}
// attempt to commit
if _, err := tx.Commit(); err != nil {
    return err
}
return nil
From the library source it's clear why this doesn't work: a PendingKey isn't a Key, and incomplete keys can't be used as parents.
Is this a necessary limitation of Datastore or of the library? For those experienced with this type of requirement, did you just sacrifice the strong consistency and make both kinds global?
For Google-ability:
datastore: invalid key
datastore: cannot use pendingKey as type *"google.golang.org/cloud/datastore".Key
One thing to note is that transactions in the Cloud Datastore API can operate on up to 25 entity groups, but this doesn't answer the question of how to create two entities in the same entity group as part of a single transaction.
There are a few ways to approach this (note that this applies to any use of the Cloud Datastore API, not just the gcloud-golang library):
1. Use a (string) name for the parent key instead of having Datastore automatically assign a numeric ID:
    parentKey := datastore.NewKey(ctx, "Parent", "parent-name", 0, nil)
    childKey := datastore.NewIncompleteKey(ctx, "Child", parentKey)
2. Make an explicit call to AllocateIDs to have Datastore pick a numeric ID for the parent key:
    incompleteKeys := []*datastore.Key{datastore.NewIncompleteKey(ctx, "Parent", nil)}
    completeKeys, err := datastore.AllocateIDs(ctx, incompleteKeys)
    if err != nil {
        // ...
    }
    parentKey := completeKeys[0]
    childKey := datastore.NewIncompleteKey(ctx, "Child", parentKey)
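Either way, once the parent key is complete it can be used as the parent of the child's incomplete key, and both Puts fit in one transaction. A minimal sketch continuing the question's code (user and linkedAccount are the same variables assumed there):

    tx, err := datastore.NewTransaction(ctx)
    if err != nil {
        return err
    }
    // parentKey is complete (named or allocated above), so it is a valid parent.
    if _, err := tx.Put(parentKey, user); err != nil {
        return err
    }
    childKey := datastore.NewIncompleteKey(ctx, "GithubAccount", parentKey)
    if _, err := tx.Put(childKey, linkedAccount); err != nil {
        return err
    }
    if _, err := tx.Commit(); err != nil {
        return err
    }
    return nil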

MGO and long running Web Services - recovery

I've written a REST web service that uses mongo as the backend data store. I was wondering, at this stage (before deployment), what the best practices are for a service that essentially runs forever(ish).
Currently, I'm following this type of pattern:
// database.go
...
type DataStore struct {
    mongoSession *mgo.Session
}
...
func (d *DataStore) OpenSession() {
    ... // read setup from environment
    d.mongoSession, err = mgo.Dial(mongoURI)
    if err != nil {}
    ...
}

func (d *DataStore) CloseSession() {...}

func (d *DataStore) Find(...) (results...) {
    s := d.mongoSession.Copy()
    defer s.Close()
    // do stuff, return results
}
In main.go:
func main() {
    ds := NewDataStore()
    ds.OpenSession()
    defer ds.CloseSession()

    // Web Service Routes..
    ...
    ws.Handle("/find/{abc}", doFindFunc)
    ...
}
My question is: what's the recommended practice for recovering from a session that has timed out or lost its connection (the mongo service provider I'm using is remote, so I assume this will happen), meaning that on any particular web service call the database session may no longer work? How do people detect that the session is no longer valid so a "fresh" one can be established?
Thanks!
What you may want is to do a session .Copy() for each incoming HTTP request (with a deferred .Close()), and copy again from that new session in your handlers if ever needed.
Connections and reconnections are managed by mgo; you can stop and restart MongoDB while making HTTP requests to your web service to see how it is affected.
If there's a DB connection problem while handling an HTTP request, a DB operation will eventually time out (the timeout can be configured by using DialWithTimeout instead of the regular Dial), so you can respond with a 5xx HTTP error code in such a case.
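As a concrete example, here is a minimal sketch of the per-request Copy pattern described above, reusing the DataStore type from the question; the database and collection names and the JSON response are assumptions for illustration:

    // Sketch of a handler that copies the master session per request.
    // d.mongoSession is the session dialed once at startup (as in the question).
    // (imports: net/http, encoding/json, gopkg.in/mgo.v2, gopkg.in/mgo.v2/bson)
    func (d *DataStore) findHandler(w http.ResponseWriter, r *http.Request) {
        s := d.mongoSession.Copy() // fresh socket from mgo's pool; reconnects if needed
        defer s.Close()

        var result bson.M
        err := s.DB("mydb").C("things").
            Find(bson.M{"abc": r.URL.Query().Get("abc")}).One(&result)
        if err != nil {
            // On a dead or timed-out connection the operation eventually errors,
            // so respond with a 5xx as suggested above.
            http.Error(w, "database unavailable", http.StatusServiceUnavailable)
            return
        }
        json.NewEncoder(w).Encode(result)
    }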
