Is it safe to just ping the database to check if my Golang app is still connected, or is there a better solution? I've read somewhere that we should not use Ping() to detect a connection loss.
Thanks.
I would say Ping() is the way to do it if you need to test the connection separately from running queries, probably at program start only.
Normally I just trust that database/sql will automatically try to reconnect if the connection fails while you are executing queries. So you can use Open to check that the DB connection arguments are correct, and trust the query to return an error if the connection is lost.
People say Ping() can cause race conditions, but they cannot demonstrate how, or provide a suitable alternative for when a connection test is needed.
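For example, a rough sketch of that approach (the MySQL DSN and the query here are placeholders, not from the question):

package main

import (
    "database/sql"
    "log"

    _ "github.com/go-sql-driver/mysql"
)

func main() {
    // Open only validates the driver name and DSN; no connection is made yet.
    db, err := sql.Open("mysql", "user:pass@tcp(localhost:3306)/app")
    if err != nil {
        log.Fatalf("bad connection arguments: %v", err)
    }
    defer db.Close()

    // A lost connection surfaces as an error from the query itself;
    // database/sql transparently retries on a fresh connection first.
    var count int
    if err := db.QueryRow("SELECT COUNT(*) FROM users").Scan(&count); err != nil {
        log.Printf("query failed, connection may be down: %v", err)
    }
}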
This is how Gorm (a widely used Golang ORM) does it:
// Send a ping to make sure the database connection is alive.
if d, ok := dbSQL.(*sql.DB); ok {
    if err = d.Ping(); err != nil {
        d.Close()
    }
}
return
https://github.com/jinzhu/gorm
It's probably better to execute a simple query and check the result. The Ping() method is safe to use, but it is optional for database drivers to implement.
See the documentation for Ping().
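For illustration, a minimal sketch of the simple-query check, assuming an existing *sql.DB named db:

// SELECT 1 is cheap and supported by most SQL databases.
var one int
if err := db.QueryRow("SELECT 1").Scan(&one); err != nil {
    log.Printf("database unreachable: %v", err)
}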
You can ping the database to test it, then close the connection and test it again. Example:
database.go
package main

import (
    "database/sql"
    "log"

    _ "modernc.org/sqlite"
)

func sqliteConnect() (sqliteDB *sql.DB) {
    // driver name and data source name
    sqliteDB, err := sql.Open("sqlite", "./localdatabase.db")
    if err != nil {
        log.Fatalf("Cannot connect to sqlite DB: %v", err)
    }
    return sqliteDB
}
database_test.go
package main

import (
    "testing"
)

func TestSqliteConnect(t *testing.T) {
    db := sqliteConnect()
    // No idle connections, so Close() below releases everything immediately.
    db.SetMaxIdleConns(0)
    t.Run("Ping database", func(t *testing.T) {
        if err := db.Ping(); err != nil {
            t.Errorf("Unable to ping sqlite database ... %v", err)
        }
    })
    db.Close()
    t.Run("Check Connections", func(t *testing.T) {
        if db.Stats().OpenConnections != 0 {
            t.Errorf("Unable to close database connections ... %v", db.Stats().OpenConnections)
        }
    })
}
Related
Most Go/GORM examples I've seen show AutoMigrate being called immediately after opening the database connection, including the GORM documentation here. For API services, this would be an expensive/unwanted call with every API request. So I assume that, for API services, AutoMigrate should be removed from the regular flow and handled separately. Is my understanding correct?
From GORM Documentation
...
db, err := gorm.Open(sqlite.Open("test.db"), &gorm.Config{})
if err != nil {
panic("failed to connect database")
}
// Migrate the schema
db.AutoMigrate(&Product{})
...
It wouldn't happen on every API request. Not even close. It'd happen every time the application is started, so basically: connect to the DB in main, and run AutoMigrate there. Pass the connection as a dependency to your handlers/service packages/wherever you need them. The HTTP handler can just access it there.
Basically this:
package main

func main() {
    db, err := gorm.Open(sqlite.Open("test.db"), &gorm.Config{})
    if err != nil {
        fmt.Printf("Failed to connect to DB: %v", err)
        os.Exit(1)
    }

    // see below how this is handled
    fRepo := foo.New(db) // all repos here
    fRepo.Migrate()      // this handles migrations

    // create request handlers
    fHandler := handlers.NewFoo(fRepo) // migrations have already been handled

    mux := http.NewServeMux()
    mux.HandleFunc("/foo/list", fHandler.List) // set up handlers

    // start server etc...
}
Have the code that interacts with the DB in some package like this:
package foo

// The DB connection interface as you use it; the method signatures
// match *gorm.DB so a gorm connection satisfies it.
type Connection interface {
    Create(value interface{}) *gorm.DB
    Find(dest interface{}, conds ...interface{}) *gorm.DB
    AutoMigrate(dst ...interface{}) error
}

type Foo struct {
    db Connection
}

func New(db Connection) *Foo {
    return &Foo{
        db: db,
    }
}

func (f *Foo) Migrate() {
    f.db.AutoMigrate(&Stuff{}) // all types this repo deals with go here
}

func (f *Foo) GetAll() ([]Stuff, error) {
    ret := []Stuff{}
    res := f.db.Find(&ret)
    return ret, res.Error
}
Then have your handlers structured in a sensible way, and provide them with the repository (aka foo package stuff):
package handlers

type FooRepo interface {
    GetAll() ([]foo.Stuff, error)
}

type FooHandler struct {
    repo FooRepo
}

func NewFoo(repo FooRepo) *FooHandler {
    return &FooHandler{
        repo: repo,
    }
}

func (f *FooHandler) List(res http.ResponseWriter, req *http.Request) {
    all, err := f.repo.GetAll()
    if err != nil {
        res.WriteHeader(http.StatusInternalServerError)
        io.WriteString(res, err.Error())
        return
    }
    // write response as needed, e.g. as JSON
    _ = json.NewEncoder(res).Encode(all)
}
Whenever you deploy an updated version of your application, the main function will call AutoMigrate, and the application will handle requests without constantly re-connecting to the DB or attempting to handle migrations time and time again.
I don't know why you'd think that your application would have to run through the setup for each request, especially given that your main function (or some function you call from main) explicitly creates an HTTP server, and listens on a specific port for requests. The DB connection and subsequent migrations should be handled before you start listening for requests. It's not part of handling requests, ever...
var db *pgx.Conn // database

func ConnectCockroachDB() {
    var dbUri string = "testurl"
    conn, err := pgx.Connect(context.Background(), dbUri)
    if err != nil {
        fmt.Fprintf(os.Stderr, "Unable to connect to database: %v\n", err)
        os.Exit(1)
    }
    db = conn
}

// returns a handle to the DB object
func GetDB() *pgx.Conn {
    if db == nil {
        ConnectCockroachDB()
    }
    return db
}
But after I restart the DB, the application cannot reconnect automatically; the error is "conn closed". I need to restart the application to make the app work again.
To just answer your question:
func GetDB() *pgx.Conn {
    if db == nil || db.IsClosed() {
        ConnectCockroachDB()
    }
    return db
}
However, this code is not safe for use from multiple goroutines: there should be a lock around accessing and updating the global db instance to avoid race conditions.
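A minimal sketch of the locked variant (the dbMu mutex is an addition for illustration):

var (
    dbMu sync.Mutex
    db   *pgx.Conn
)

func GetDB() *pgx.Conn {
    dbMu.Lock()
    defer dbMu.Unlock()
    if db == nil || db.IsClosed() {
        ConnectCockroachDB() // still exits the process on failure, as above
    }
    return db
}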
Since you're using pgx, you might want to try using pgxpool to connect: https://pkg.go.dev/github.com/jackc/pgx/v4/pgxpool
It will automatically take care of creating a new connection if the old one got closed, and it supports concurrent usage out of the box.
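A minimal sketch with pgxpool (v4), reusing the dbUri from the question:

pool, err := pgxpool.Connect(context.Background(), dbUri)
if err != nil {
    fmt.Fprintf(os.Stderr, "Unable to create connection pool: %v\n", err)
    os.Exit(1)
}
defer pool.Close()

// The pool is safe for concurrent use and replaces broken connections itself.
var now time.Time
err = pool.QueryRow(context.Background(), "SELECT now()").Scan(&now)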
I have a requirement where my application talks to different databases. How do I manage connections in GORM? Is there any way GORM supports connection management for multiple databases, or do I need to create a map which holds all database connections?
if val, ok := selector.issure_db[issuer]; ok {
    return val, nil
} else {
    selector.mu.Lock()
    dbo, err := db.NewDb(Config)
    if err != nil {
        boot.Logger(ctx).Fatal(err.Error())
    }
    selector.issure_db[issuer] = dbo
    selector.mu.Unlock()
    return dbo, nil
}
Is there a better way to do this?
You can use the dbresolver plugin for GORM. It manages multiple sources and replicas and maintains an underlying connection pool for the group. You can even map models in your app to the correct database using the config.
Example from the docs:
import (
    "gorm.io/gorm"
    "gorm.io/plugin/dbresolver"
    "gorm.io/driver/mysql"
)

db, err := gorm.Open(mysql.Open("db1_dsn"), &gorm.Config{})

db.Use(dbresolver.Register(dbresolver.Config{
    // use `db2` as sources, `db3`, `db4` as replicas
    Sources:  []gorm.Dialector{mysql.Open("db2_dsn")},
    Replicas: []gorm.Dialector{mysql.Open("db3_dsn"), mysql.Open("db4_dsn")},
    // sources/replicas load balancing policy
    Policy: dbresolver.RandomPolicy{},
}).Register(dbresolver.Config{
    // use `db1` as sources (DB's default connection), `db5` as replicas for `User`, `Address`
    Replicas: []gorm.Dialector{mysql.Open("db5_dsn")},
}, &User{}, &Address{}).Register(dbresolver.Config{
    // use `db6`, `db7` as sources, `db8` as replicas for `orders`, `Product`
    Sources:  []gorm.Dialector{mysql.Open("db6_dsn"), mysql.Open("db7_dsn")},
    Replicas: []gorm.Dialector{mysql.Open("db8_dsn")},
}, "orders", &Product{}, "secondary"))
You can create a package called database and write an init function in an init.go file that creates a DB object for each database you have. You can then use these db objects everywhere in the application, which enables connection pooling as well.
init.go
var db *gorm.DB

func init() {
    var err error
    dataSourceName := fmt.Sprintf("%s:%s@tcp(%s:%d)/%s?charset=utf8&parseTime=True&loc=Local", dbUser, dbPassword, dbHost, dbPort, dbName)
    db, err = gorm.Open("mysql", dataSourceName)
    if err != nil {
        panic(err)
    }
    db.DB().SetConnMaxLifetime(10 * time.Second)
    db.DB().SetMaxIdleConns(10)
    // initialise other db objects here
}
users.go
func getFirstUser() (user User) {
    db.First(&user)
    return
}
PS> This solution is efficient if you have to connect to one or two databases. If you need to connect to multiple databases at the same time, you should use the dbresolver plugin.
Old Answer
You can write a separate function which returns current database connection object every time you call the function.
func getDBConnection(dbUser, dbPassword, dbHost string, dbPort int, dbName string) (db *gorm.DB, err error) {
    dataSourceName := fmt.Sprintf("%s:%s@tcp(%s:%d)/%s?charset=utf8&parseTime=True&loc=Local", dbUser, dbPassword, dbHost, dbPort, dbName)
    db, err = gorm.Open("mysql", dataSourceName)
    if err != nil {
        return nil, err
    }
    db.DB().SetConnMaxLifetime(10 * time.Second)
    return
}
And call defer db.Close() every time after you call the getDBConnection function.
func getFirstUser() (user User) {
    db, _ := getDBConnection(dbUser, dbPassword, dbHost, dbPort, dbName)
    defer db.Close()
    db.First(&user)
    return
}
This way your connections will be closed every time after you have executed the query.
I use memcache for caching and the client I use is https://github.com/bradfitz/gomemcache. When I try to initiate a new client with a dummy/invalid server address and then ping it, it returns no error.
package main

import (
    "fmt"

    m "github.com/bradfitz/gomemcache/memcache"
)

func main() {
    o := m.New("dummy_adress")
    fmt.Println(o.Ping()) // returns no error
}
I think it's supposed to return an error since the server is invalid. What am I missing?
It looks like the New() call ignores the return value for SetServers:
func New(server ...string) *Client {
    ss := new(ServerList)
    ss.SetServers(server...)
    return NewFromSelector(ss)
}
The SetServers() function will only set the server list to valid servers (in your case: no servers), and the Ping() function will only ping servers that are set; since there are no servers set, it doesn't really do anything.
This is arguably a feature; if you have 4 servers and one is down, then that's not really an issue. Even with just 1 server, memcache is generally optional.
You can duplicate the New() logic with an error check:
ss := new(memcache.ServerList)
err := ss.SetServers("example.localhost:11211")
if err != nil {
    panic(err)
}

c := memcache.NewFromSelector(ss)
err = c.Ping()
if err != nil {
    panic(err)
}
Which gives:
panic: dial tcp 127.0.0.1:11211: connect: connection refused
I have an application that opens a lot of routines, let's say 2000 routines. Each routine needs access to the DB, or at least needs to update/select data from the DB.
My current approach is the following:
Each routine gets a *gorm.DB with db.GetConnection(); this is the code of that function:
func GetConnection() *gorm.DB {
    DBConfig := config.GetConfig().DB
    db, err := gorm.Open("mysql", DBConfig.DBUser+":"+DBConfig.DBPassword+"@/"+DBConfig.DBName+"?charset=utf8mb4")
    if err != nil {
        panic(err.Error())
    }
    return db
}
Then the routine calls another function from some storage package, passes the *gorm.DB to it, and closes the connection. It looks like this:
dbConnection := db.GetConnection()
postStorage.UpdateSomething(dbConnection)
db.CloseConnection(dbConnection)
The above is only an example; the main idea is that each routine opens a new connection, and I don't like that because it may overload the DB. As a result I get the following MySQL error:
[mysql] 2020/07/16 19:34:26 packets.go:37: read tcp 127.0.0.1:44170->127.0.0.1:3306: read: connection reset by peer
The question is: what is a good pattern for using the gorm package in a multi-goroutine application?
*gorm.DB is safe for concurrent use by multiple goroutines, so you can share one *gorm.DB across your routines. Initialise it once and get it whenever you want. Demo:
package db

var db *gorm.DB

func init() {
    DBConfig := config.GetConfig().DB
    var err error
    // assign with =, not :=, so the package-level db is set (:= would shadow it)
    db, err = gorm.Open("mysql", DBConfig.DBUser+":"+DBConfig.DBPassword+"@/"+DBConfig.DBName+"?charset=utf8mb4")
    if err != nil {
        panic(err.Error())
    }
}

func GetConnection() *gorm.DB {
    return db
}