GORM pq too many connections - go

I am using GORM in my project, and everything was fine until I got an error that said:
pq: sorry, too many clients already
I just use the default configuration. The error happened after I sent a lot of test requests to my application.
The error goes away after I restart my application, so I suspect the GORM connections are not released after I'm done with a query. I haven't dug deeply into the GORM code; I'm asking here in case someone has already run into this.

The error message you are getting is a PostgreSQL error, not a GORM one. It is caused by opening the database connection more than once.
db, err := gorm.Open("postgres", "user=gorm dbname=gorm")
should be called once and the returned handle reused after that.

var (
    instance *gorm.DB
    once     sync.Once
)

func getDB() *gorm.DB {
    once.Do(func() {
        var err error
        instance, err = gorm.Open("postgres", "user=gorm dbname=gorm")
        if err != nil {
            log.Println("Connection Failed to Open")
            return
        }
        log.Println("Connection Established")
        instance.DB().SetMaxIdleConns(10)
        instance.LogMode(true)
    })
    return instance
}
You can wrap the connection in a singleton like this, so it is opened only once even if the function is called many times.
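As a hedged usage sketch (the HTTP handler and User model are hypothetical, and the net/http and encoding/json imports are omitted), every caller goes through getDB() instead of calling gorm.Open again, so only one connection pool is ever created:
func listUsers(w http.ResponseWriter, r *http.Request) {
    var users []User
    if err := getDB().Find(&users).Error; err != nil { // reuses the single shared *gorm.DB
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    json.NewEncoder(w).Encode(users)
}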

Related

Why is Go connecting to a database synchronously?

I'm coming from a Node background and trying to get into Go by looking at code examples.
I find it weird that the code is mostly synchronous - even for things like connecting to and communicating with the database, e.g.:
func main() {
// Create a new client and connect to the server
client, err := mongo.Connect(context.TODO(), options.Client().ApplyURI(uri))
if err != nil {
panic(err)
}
}
Doesn't this block the thread until the DB sends back a response? If not, how is that possible?
Yeah there's this difference:
In Node, nothing blocks unless you say otherwise (with await or a callback).
In Go, everything blocks unless you say otherwise (with go).
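So if you want Node-like behavior, you start the blocking call on its own goroutine. A minimal sketch (reusing the mongo.Connect call from the question; the channel is just one way to hand the result back):
clientCh := make(chan *mongo.Client, 1)
go func() {
    // The call still blocks, but only this goroutine waits on it.
    client, err := mongo.Connect(context.TODO(), options.Client().ApplyURI(uri))
    if err != nil {
        panic(err)
    }
    clientCh <- client
}()
// ... do other work while the connection is being established ...
client := <-clientCh // block only at the point where the client is actually needed
defer client.Disconnect(context.TODO())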

Retry on redis connection failure

Wondering why redigo decided not to export the errorConn type, which would allow applications to have specific error handling for connection failures. As implemented, applications have to handle these as generic errors.
For example, our application generally doesn't care if a single PUT fails, but if the issue is a redis connection failure or redis pool being exhausted, moving on to the next PUT (especially if it requires opening a new connection) is a bad idea. We should stop and retry (with exponential backoff) until the connection comes back.
Code example where redigo returns a generic error if the connection pool is exhausted
The lines of code in your link return two values of the respective types: (Conn, error).
if !p.Wait && p.MaxActive > 0 && p.active >= p.MaxActive {
p.mu.Unlock()
return errorConn{ErrPoolExhausted}, ErrPoolExhausted
}
The type Conn is an interface with an Err method.
// Err returns a non-nil value when the connection is not usable.
Err() error
So to obtain the underlying error, you can either:
call the Err method on the first return value; or
check the second error return value.
As a side note, the recommended way to compare errors is by using errors.Is and/or errors.As from the standard library errors package.
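As a hedged sketch of the retry idea (the backoff parameters and the SET command are arbitrary choices, not from the question), a caller can check the error against redis.ErrPoolExhausted with errors.Is and back off before trying again:
import (
    "errors"
    "fmt"
    "time"

    "github.com/gomodule/redigo/redis"
)

// setWithRetry retries a SET with exponential backoff while the pool is exhausted.
func setWithRetry(pool *redis.Pool, key, value string) error {
    backoff := 100 * time.Millisecond
    for attempt := 0; attempt < 5; attempt++ {
        conn := pool.Get()
        err := conn.Err() // non-nil when the pool handed back an unusable connection
        if err == nil {
            _, err = conn.Do("SET", key, value)
        }
        conn.Close()
        if err == nil {
            return nil
        }
        if errors.Is(err, redis.ErrPoolExhausted) {
            time.Sleep(backoff)
            backoff *= 2 // exponential backoff before the next attempt
            continue
        }
        return err // a different error: report it to the caller
    }
    return fmt.Errorf("set %q: pool still exhausted after retries", key)
}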

Snowflake Go Sessions Keep Terminating

I am using the gosnowflake 1.40 driver. I am seeing my sessions cycle after 2 queries, less than 1 second apart.
Connection setup looks something like this:
dsn, err := sf.DSN(sfConfig)
if err != nil {
panic("cannot get snowflake session: " + err.Error())
}
DBSession, err = sql.Open("snowflake", dsn)
if err != nil {
panic("cannot get snowflake session: " + err.Error())
}
return DBSession, nil
I use the following query pattern inside a function:
result = dbSession.QueryRow(command)
This session cycling pattern is not ideal, as I'd like to be able to assume a role and run multiple commands. Can someone point me to what I can do to make the Snowflake sessions persist? I don't have this problem using the WebUI.
DB maintains a pool of connections. Each connection in the pool will have a unique session ID. From the documentation:
DB is a database handle representing a pool of zero or more underlying connections. It's safe for concurrent use by multiple goroutines.
The sql package creates and frees connections automatically; it also maintains a free pool of idle connections.
You have a couple options for bypassing the default behavior of cycling through the pool of connections:
Obtain a specific Conn instance from the connection pool using DB.Conn() (see the sketch below). The documentation specifically states:
Queries run on the same Conn will be run in the same database session.
Modify the connection pool parameters using DB.SetMaxOpenConns(). I suspect that setting this to 1 will also obtain the desired behavior. However, this introduces the scalability/concurrency concerns that are addressed by having a connection pool in the first place.
Note, I'm not familiar with the Snowflake driver in particular. There may be other options that the driver supports.
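For example, a minimal sketch of the DB.Conn() option (the role name is a placeholder; dbSession and command are the names from the question):
ctx := context.Background()

// Pin one connection from the pool; Close returns it to the pool.
conn, err := dbSession.Conn(ctx)
if err != nil {
    return err
}
defer conn.Close()

// Both statements run on the same connection, and therefore in the same Snowflake session.
if _, err := conn.ExecContext(ctx, "USE ROLE my_role"); err != nil {
    return err
}
var result string
if err := conn.QueryRowContext(ctx, command).Scan(&result); err != nil {
    return err
}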

How do I get last connection error when dialling gRPC server?

I am having the following code:
dialCtx, cancel := context.WithTimeout(ctx, 120*time.Second)
defer cancel()
conn, err := grpc.DialContext(dialCtx, address,
grpc.WithTransportCredentials(creds),
grpc.WithKeepaliveParams(keepAlive),
grpc.WithBlock(),
)
if err != nil {
return fmt.Errorf("failed to connect to server: %v", err)
}
I am trying to create a connection to a gRPC server. One important thing is that I am using WithBlock(), which blocks the dial until the connection is ready or the context times out. But when the context times out, I don't get what the connection problem was (i.e. the last connection error); I just get context deadline exceeded.
I tried following:
Using grpc.FailOnNonTempDialError(true) - an error is returned when the service is not available, but when TLS verification fails, reconnection attempts continue.
Using grpc.WithContextDialer(...) - this does not work for me, because sometimes the initial dialing succeeds, but if server certificate validation fails, the whole connection is closed.
How can I get that last connection error?
After some more research, I decided to update the grpc package version. I was using v1.27.0 and the latest is v1.35.0. Between these versions, the problem was fixed and a new dial option was introduced:
grpc.WithReturnConnectionError()
It's much better now, but there is room for improvement. Currently, the last error and the context error are combined like this:
conn, err = nil, fmt.Errorf("%v: %v", ctx.Err(), err)
The problem is that the underlying error's type is lost, so the only way to act on the error is string comparison (which is not reliable).
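For reference, the dial from the question with the new option added (a sketch; grpc.WithReturnConnectionError already implies blocking, so WithBlock is kept only to match the original code):
dialCtx, cancel := context.WithTimeout(ctx, 120*time.Second)
defer cancel()
conn, err := grpc.DialContext(dialCtx, address,
    grpc.WithTransportCredentials(creds),
    grpc.WithKeepaliveParams(keepAlive),
    grpc.WithBlock(),
    grpc.WithReturnConnectionError(), // on timeout, err now includes the last connection error, not just "context deadline exceeded"
)
if err != nil {
    return fmt.Errorf("failed to connect to server: %v", err)
}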
I hope this answer is useful.

How frequently should I be calling sql.Open in my program?

As the title says, I don't know whether having multiple sql.Open calls is a good or bad thing, or whether I should instead have a file with just an init that is something like:
var db *sql.DB
func init() {
var err error
db, err = sql.Open
}
just wondering what the best practice would be. Thanks!
You should at least check the error.
As mentioned in "Connecting to a database":
Note that Open does not directly open a database connection: this is deferred until a query is made. To verify that a connection can be made before making a query, use the Ping function:
if err := db.Ping(); err != nil {
log.Fatal(err)
}
After use, the database is closed using Close.
If possible, limit the number of connections opened to a database to a minimum.
See "Go/Golang sql.DB reuse in functions":
You shouldn't need to open database connections all over the place.
The database/sql package does connection pooling internally, opening and closing connections as needed, while providing the illusion of a single connection that can be used concurrently.
As elithrar points out in the comments, database/sql#Open does mention:
The returned DB is safe for concurrent use by multiple goroutines and maintains its own pool of idle connections.
Thus, the Open function should be called just once.
It is rarely necessary to close a DB.
As mentioned here
Declaring *sql.DB globally also has some additional benefits, such as SetMaxIdleConns (regulating the connection pool size) or sharing prepared SQL statements across your application.
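A short sketch of what that enables (the driver name, DSN, and prepared statement are placeholders):
var (
    db         *sql.DB
    userByName *sql.Stmt
)

func setupDB() error {
    var err error
    db, err = sql.Open("postgres", "user=app dbname=app sslmode=disable")
    if err != nil {
        return err
    }
    db.SetMaxIdleConns(10) // regulate the connection pool size
    userByName, err = db.Prepare("SELECT id FROM users WHERE name = $1")
    if err != nil {
        return err
    }
    return db.Ping() // verify a connection can actually be established
}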
You can use an init function, which runs when the package is initialized, even if you don't have a main():
var db *sql.DB
func init() {
    var err error
    db, err = sql.Open(DBparms....)
    if err != nil {
        log.Fatal(err)
    }
}
init() is always called, regardless of whether there is a main or not, so if you import a package that has an init function, it will be executed.
You can have multiple init() functions per package; they will be executed in the order they appear in the code (after all package-level variables are initialized, of course).
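For example (a trivial illustration; both functions run in source order before main):
func init() { log.Println("first init") }
func init() { log.Println("second init") } // runs after the first one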
