Infinite loop when db.Ping() is called - go

I am attempting to create a basic connection to a database. The problem happens when I try to test the connection with db.Ping(); everything works until I get to this line. The Ping sends the program into an infinite loop (the function call never returns), and I'm not sure how to go about fixing this.
package main
import (
"database/sql"
"fmt"
"html/template"
"net/http"
_ "github.com/lib/pq"
)
type Page struct {
Name string
DBStatus bool
}
const (
host = "localhost"
port = 8080
user = "username"
password = "password"
dbname = "GoTest"
)
func main() {
templates := template.Must(template.ParseFiles("templates/index.html"))
psqlInfo := fmt.Sprintf("host=%s port=%d user=%s "+
"password=%s dbname=%s sslmode=disable",
host, port, user, password, dbname)
db, err := sql.Open("postgres", psqlInfo)
if err != nil {
panic(err)
}
defer db.Close()
http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
p := Page{Name: "Gopher"}
if name := r.FormValue("name"); name != "" {
p.Name = name
}
p.DBStatus = db.Ping() == nil // this line is reached, but Ping never returns
if err := templates.ExecuteTemplate(w, "index.html", p); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
}
})
fmt.Println(http.ListenAndServe(":8080", nil))
}
It seems I can connect to the database fine, as the sql.Open call doesn't return an error, and if I call Ping outside of the HTTP handler function, it also returns just fine.
Any help would be greatly appreciated!

Your database configuration is wrong: port 8080 is your Go server's port, so the connection string points Ping at your own HTTP server instead of at PostgreSQL (default port 5432). Before ListenAndServe starts, nothing is listening on 8080, so Ping returns quickly; inside the handler it dials the running HTTP server, which never answers the PostgreSQL startup handshake, so the call blocks.
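For example, assuming PostgreSQL is listening on its default port, the const block would look something like this (a sketch; adjust to your actual setup):
const (
host = "localhost"
port = 5432 // PostgreSQL's default port; the Go HTTP server keeps :8080
user = "username"
password = "password"
dbname = "GoTest"
)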

Related

Why am I getting "connection closed before server preface received" in Go?

I am trying to set up a gRPC server and a proxy HTTP server on the same port using grpc-gateway. Weirdly, I sometimes get a "failed to receive server preface within timeout" error at random. Most of the time it happens on service restarts; after a couple of retries it starts working and returns proper responses. I am not sure what's happening. Can somebody help me out? Here is the service startup snippet:
func makeHttpServer(conn *grpc.ClientConn) *runtime.ServeMux {
router := runtime.NewServeMux()
if err := services.RegisterHealthServiceHandler(context.Background(), router, conn); err != nil {
log.Logger.Error("Failed to register gateway", zap.Error(err))
}
if err := services.RegisterConstraintsServiceHandler(context.Background(), router, conn); err != nil {
log.Logger.Error("Failed to register gateway", zap.Error(err))
}
return router
}
func makeGrpcServer(address string) (*grpc.ClientConn, *grpc.Server) {
grpcServer := grpc.NewServer()
services.RegisterHealthServiceServer(grpcServer, health.Svc{})
services.RegisterABCServer(grpcServer, ABC.Svc{})
conn, err := grpc.DialContext(
context.Background(),
address,
grpc.WithInsecure(),
)
if err != nil {
log.Logger.Error("Failed to dial server", zap.Error(err))
}
return conn, grpcServer
}
func httpGrpcRouter(grpcServer *grpc.Server, httpHandler *runtime.ServeMux, listener net.Listener) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.ProtoMajor == 2 {
grpcServer.Serve(listener)
} else {
httpHandler.ServeHTTP(w, r)
}
})
}
func Start() error {
conf := config.Get()
address := fmt.Sprintf("%s:%d", conf.ServerHost, conf.ServerPort)
listener, err := net.Listen("tcp", address)
if err != nil {
log.Logger.Fatal("failed to listen: %v", zap.Error(err))
}
conn, grpcServer := makeGrpcServer(address)
router := makeHttpServer(conn)
log.Logger.Info("Starting server on address : " + address)
err = http.Serve(listener, httpGrpcRouter(grpcServer, router, listener))
return err
}
Try wrapping your router with h2c.NewHandler so the http.Serve() call looks as follows:
err = http.Serve(listener, h2c.NewHandler(
httpGrpcRouter(grpcServer, router, listener),
&http2.Server{}),
)
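For reference, h2c.NewHandler and http2.Server come from the golang.org/x/net module, so the snippet above assumes imports along these lines:
import (
"golang.org/x/net/http2"
"golang.org/x/net/http2/h2c"
)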

CRUD operations on Redshift databases using Golang

Could you please give me some explanations and some code examples of how this would be done (e.g. creating tables and inserting data)?
Which library would you advise me to use?
Thanks!
Please note the side-effect import of github.com/lib/pq.
After this, queries can be run with db.Query() or db.Exec().
https://golang.org/pkg/database/sql/#example_DB_Query
https://golang.org/pkg/database/sql/#pkg-examples
import (
_ "github.com/lib/pq"
"database/sql"
"fmt"
)
func MakeRedshiftConnection(username, password, host, port, dbName string) (*sql.DB, error) {
url := fmt.Sprintf("sslmode=require user=%v password=%v host=%v port=%v dbname=%v",
username,
password,
host,
port,
dbName)
var err error
var db *sql.DB
if db, err = sql.Open("postgres", url); err != nil {
return nil, fmt.Errorf("redshift connect error : (%v)"), err
}
if err = db.Ping(); err != nil {
return nil, fmt.Errorf("redshift ping error : (%v)", err)
}
return db, nil
}
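To cover the create-table and insert part of the question, here is a minimal sketch built on the helper above. The events table, its columns, and the host value are made up for illustration; 5439 is Redshift's default port:
db, err := MakeRedshiftConnection("username", "password", "my-cluster.redshift.amazonaws.com", "5439", "mydb")
if err != nil {
panic(err)
}
defer db.Close()
// Create a table and insert a row; $1/$2 placeholders are handled by lib/pq.
if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS events (id INT, payload VARCHAR(256))`); err != nil {
panic(err)
}
if _, err := db.Exec(`INSERT INTO events (id, payload) VALUES ($1, $2)`, 1, "hello"); err != nil {
panic(err)
}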

Missing symbols when importing `github.com/influxdb/influxdb/client/v2` package

I'm setting up a web socket server on Google Cloud in Go, and an import that works fine on my local machine does not work on the cloud.
I have:
import "github.com/influxdb/influxdb/client/v2"
and have run
go get "github.com/influxdb/influxdb/client/v2"
Upon running go run server.go I get:
# command-line-arguments
./pi_server.go:47: undefined: client.NewClient
./pi_server.go:47: undefined: client.Config
Full code below, excluding const declarations and html:
package main
import (
"flag"
"html/template"
"log"
"net/http"
"github.com/gorilla/websocket"
"fmt"
"net/url"
"github.com/influxdb/influxdb/client/v2"
"time"
)
var addr = flag.String("addr", "localhost:8080", "http service address")
var upgrader = websocket.Upgrader{} // use default options
func echo(w http.ResponseWriter, r *http.Request) {
//Influx init
u,err := url.Parse("http://localhost:8086")
checkError(err)
influx_c := client.NewClient(client.Config{
URL: u,
Username: username,
Password: password,
})
bp,err := client.NewBatchPoints(client.BatchPointsConfig{
Database: MyDB,
Precision: "s",
})
tags := map[string]string{"my_sensor_id": my_sensor_id}
//end influx init
c, err := upgrader.Upgrade(w, r, nil)
if err != nil {
log.Print("upgrade:", err)
return
}
defer c.Close()
for {
mt, message, err := c.ReadMessage()
if err != nil {
log.Println("read:", err)
break
}
log.Printf("recv: %s", message)
/*
write to influx here
*/
fields := map[string]interface{}{
"random_int": message,
"other_stuff": 69696,
}
pt,err := client.NewPoint("test_collection", tags, fields, time.Now())
checkError(err)
bp.AddPoint(pt)
influx_c.Write(bp)
err = c.WriteMessage(mt, message)
if err != nil {
log.Println("write:", err)
break
}
}
}
func home(w http.ResponseWriter, r *http.Request) {
homeTemplate.Execute(w, "ws://"+r.Host+"/echo", )
}
func main() {
flag.Parse()
log.SetFlags(0)
http.HandleFunc("/echo", echo)
http.HandleFunc("/", home)
log.Fatal(http.ListenAndServe(*addr, nil))
}
Your local machine has a version of github.com/influxdb/influxdb/client/v2 from before this commit. Your cloud server is fetching a more recent version of the package.
To fix the issue, run
go get -u github.com/influxdb/influxdb/client/v2
on your local machine to get the latest version of the package. Update the application code to use the new function and type names:
influx_c := client.NewHTTPClient(client.HTTPConfig{
URL: u,
Username: username,
Password: password,
})
Nailed it, thanks! Also note, from the following code:
influx_c,err := client.NewHTTPClient(client.HTTPConfig{
Addr: "http://localhost:8086",
Username: username,
Password: password,
})
They changed the URL field to Addr, which is a plain string instead of a net/url object.

BoltDB key-value data store purely in Go

Bolt obtains a file lock on the data file so multiple processes cannot open the same database at the same time. Opening an already open Bolt database will cause it to hang until the other process closes it.
As this is the case, is there any connection pooling concept, i.e. various clients connecting to and accessing the database at the same time? Is this possible in boltdb, with multiple connections reading and writing the database at the same time? How can it be implemented?
A Bolt database is usually embedded into a larger program and is not used over the network like you would with shared databases (think SQLite vs MySQL). Using Bolt is a bit like having a persistent map[[]byte][]byte if that were possible. Depending on what you are doing, you might want to just use something like Redis.
That said, if you need to use Bolt this way, it is not very difficult to wrap with a simple server. Here is an example that writes/reads keys from a Bolt DB over HTTP. You can use Keep-Alive for connection pooling.
Code at: https://github.com/skyec/boltdb-server
package main
import (
"flag"
"fmt"
"io/ioutil"
"log"
"net/http"
"time"
"github.com/boltdb/bolt"
"github.com/gorilla/mux"
)
type server struct {
db *bolt.DB
}
func newServer(filename string) (s *server, err error) {
s = &server{}
s.db, err = bolt.Open(filename, 0600, &bolt.Options{Timeout: 1 * time.Second})
return
}
func (s *server) Put(bucket, key, contentType string, val []byte) error {
return s.db.Update(func(tx *bolt.Tx) error {
b, err := tx.CreateBucketIfNotExists([]byte(bucket))
if err != nil {
return err
}
if err = b.Put([]byte(key), val); err != nil {
return err
}
return b.Put([]byte(fmt.Sprintf("%s-ContentType", key)), []byte(contentType))
})
}
func (s *server) Get(bucket, key string) (ct string, data []byte, err error) {
err = s.db.View(func(tx *bolt.Tx) error {
b := tx.Bucket([]byte(bucket))
if b == nil {
return fmt.Errorf("bucket %q not found", bucket)
}
r := b.Get([]byte(key))
if r != nil {
data = make([]byte, len(r))
copy(data, r)
}
r = b.Get([]byte(fmt.Sprintf("%s-ContentType", key)))
ct = string(r)
return nil
})
return
}
func (s *server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
vars := mux.Vars(r)
if vars["bucket"] == "" || vars["key"] == "" {
http.Error(w, "Missing bucket or key", http.StatusBadRequest)
return
}
switch r.Method {
case "POST", "PUT":
data, err := ioutil.ReadAll(r.Body)
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
err = s.Put(vars["bucket"], vars["key"], r.Header.Get("Content-Type"), data)
w.WriteHeader(http.StatusOK)
case "GET":
ct, data, err := s.Get(vars["bucket"], vars["key"])
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
w.Header().Add("Content-Type", ct)
w.Write(data)
}
}
func main() {
var (
addr string
dbfile string
)
flag.StringVar(&addr, "l", ":9988", "Address to listen on")
flag.StringVar(&dbfile, "db", "/var/data/bolt.db", "Bolt DB file")
flag.Parse()
log.Println("Using Bolt DB file:", dbfile)
log.Println("Listening on:", addr)
server, err := newServer(dbfile)
if err != nil {
log.Fatalf("Error: %s", err)
}
router := mux.NewRouter()
router.Handle("/v1/buckets/{bucket}/keys/{key}", server)
http.Handle("/", router)
log.Fatal(http.ListenAndServe(addr, nil))
}
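To exercise the server above from another process (assuming it is running with its default -l :9988 flag; the demo bucket and greeting key are made up), a small client sketch:
package main
import (
"fmt"
"io/ioutil"
"net/http"
"strings"
)
func main() {
// Store a value under bucket "demo", key "greeting".
_, err := http.Post("http://localhost:9988/v1/buckets/demo/keys/greeting", "text/plain", strings.NewReader("hello"))
if err != nil {
panic(err)
}
// Read it back; the server also echoes the stored Content-Type.
resp, err := http.Get("http://localhost:9988/v1/buckets/demo/keys/greeting")
if err != nil {
panic(err)
}
defer resp.Body.Close()
body, _ := ioutil.ReadAll(resp.Body)
fmt.Println(string(body)) // hello
}
The default http.Client reuses keep-alive connections, which is the connection pooling mentioned above.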
There is no connection pooling concept in boltdb, because there is no connection. It is not a client/server database, it is an embedded database (like sqlite or Berkeley-DB).
Boltdb is designed so that multiple goroutines of the same process can access the database at the same time (using different transactions). The model is single writer, multiple readers. Boltdb is not designed to support accesses from multiple processes.
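To make the single-writer/multiple-readers model concrete, here is a minimal sketch of several goroutines sharing one bolt.DB handle within the same process (the file name, bucket, and key are made up):
package main
import (
"log"
"sync"
"github.com/boltdb/bolt"
)
func main() {
db, err := bolt.Open("app.db", 0600, nil)
if err != nil {
log.Fatal(err)
}
defer db.Close()
var wg sync.WaitGroup
// One writer: Update transactions are serialized by Bolt.
wg.Add(1)
go func() {
defer wg.Done()
err := db.Update(func(tx *bolt.Tx) error {
b, err := tx.CreateBucketIfNotExists([]byte("counters"))
if err != nil {
return err
}
return b.Put([]byte("hits"), []byte("1"))
})
if err != nil {
log.Println("write:", err)
}
}()
// Several readers: View transactions may run concurrently with each other.
for i := 0; i < 3; i++ {
wg.Add(1)
go func() {
defer wg.Done()
db.View(func(tx *bolt.Tx) error {
if b := tx.Bucket([]byte("counters")); b != nil {
_ = b.Get([]byte("hits"))
}
return nil
})
}()
}
wg.Wait()
}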
If you need a Go program to use an embedded database supporting access from multiple processes at the same time, you may want to have a look at the wrappers over LMDB, such as:
https://github.com/szferi/gomdb
https://github.com/armon/gomdb

Database connection best practice

I have an app that uses net/http. I register some handlers that need to fetch some data from a database before they can write the response and finish the request.
My question is about the best practice for connecting to this database. I want this to work whether the load is one request per minute or 10 requests per second.
I could connect to the database within the handler every time a request comes in. (This would spawn a connection to MySQL for each request?)
package main
import (
"database/sql"
_ "github.com/go-sql-driver/mysql"
"net/http"
"fmt"
)
func main() {
http.HandleFunc("/",func(w http.ResponseWriter, r *http.Request) {
db, err := sql.Open("mysql","dsn....")
if err != nil {
panic(err)
}
defer db.Close()
row := db.QueryRow("select...")
// scan row
fmt.Fprintf(w,"text from database")
})
http.ListenAndServe(":8080",nil)
}
I could connect to the database at app start. Whenever I need to use the database I ping it, and if the connection is closed I reconnect; otherwise I continue and use it.
package main
import (
"database/sql"
_ "github.com/go-sql-driver/mysql"
"net/http"
"fmt"
"sync"
)
var db *sql.DB
var mutex sync.RWMutex
func GetDb() *sql.DB {
mutex.Lock()
defer mutex.Unlock()
err := db.Ping()
if err != nil {
db, err = sql.Open("mysql","dsn...")
if err != nil {
panic(err)
}
}
return db
}
func main() {
var err error
db, err = sql.Open("mysql","dsn....")
if err != nil {
panic(err)
}
http.HandleFunc("/",func(w http.ResponseWriter, r *http.Request) {
row := GetDb().QueryRow("select...")
// scan row
fmt.Fprintf(w,"text from database")
})
http.ListenAndServe(":8080",nil)
}
Which of these ways is best, or is there another way which is better? Is it a bad idea to have multiple requests use the same database connection?
It's unlikely I will create an app that runs into MySQL's connection limit, but I don't want to ignore the fact that there is a limit.
The best way is to open the database handle once at app start-up and use that handle afterwards. Additionally, the sql.DB type is safe for concurrent use, so you don't even need mutexes to lock access to it. And finally, depending on your driver, the database handle will reconnect automatically, so you don't need to do that yourself.
var db *sql.DB
var database *Database
func init() {
hostName := os.Getenv("DB_HOST")
port := os.Getenv("DB_PORT")
username := os.Getenv("DB_USER")
password := os.Getenv("DB_PASS")
dbName := os.Getenv("DB_NAME")
var err error
// MySQL DSN format: user:password@tcp(host:port)/dbname
db, err = sql.Open("mysql", fmt.Sprintf("%s:%s@tcp(%s:%s)/%s", username, password, hostName, port, dbName))
if err != nil {
panic(err)
}
// Note: no defer db.Close() here; the handle must stay open for the lifetime of the app.
err = db.Ping()
if err != nil {
panic(err)
}
database = &Database{conn: db}
}
type Database struct {
conn *sql.DB
}
func (d *Database) GetConn() *sql.DB {
return d.conn
}
func main() {
row := database.GetConn().QueryRow("select * from")
// scan row
_ = row
}
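Regarding the MySQL connection limit mentioned in the question: sql.DB manages a pool of connections, and its size can be capped right after opening the handle. The numbers below are only illustrative:
db.SetMaxOpenConns(10) // never hold more than 10 open connections to MySQL
db.SetMaxIdleConns(5) // keep at most 5 idle connections around for reuse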
I'd recommend making the connection to your database in init().
Why? Because init() is guaranteed to run before main(), and you definitely want to make sure your DB configuration is set up right before the real work begins.
var db *sql.DB
func GetDb() (*sql.DB, error) {
d, err := sql.Open("mysql", "dsn...")
if err != nil {
return nil, err
}
return d, nil
}
func init() {
var err error
// Assign to the package-level db; using := here would shadow it.
db, err = GetDb()
if err != nil {
panic(err)
}
err = db.Ping()
if err != nil {
panic(err)
}
}
I have not tested the code above, but it should look roughly like this.
