What is the best practice for handling scope when dealing with shared connection resources to outside services in Go (RabbitMQ, databases, etc.)? For example, given this code using database/sql, pq, and net/http:
func main() {
    db, err := sql.Open("postgres", "user=root dbname=root")
    if err != nil {
        log.Fatal(err)
    }

    http.HandleFunc("/", front_handler)
    http.HandleFunc("/get", get_handler)
    http.HandleFunc("/set", set_handler)
    http.ListenAndServe(":8080", nil)
}
What's the best way to make the db object available to my registered handlers?
Do I put the db declaration outside the main scope (this would cause me unit testing problems in Python but might be okay here)?
Do I put the handler declarations inside the main scope (it doesn't seem like I'm allowed to nest functions)?
Is there an addressing scheme I can use to access the main scope (I'd do something like that in Puppet)?
Some other option?
There are a lot of ways you could deal with this. First, having opened the connection in this scope, you probably want to defer its close here:
db, err := sql.Open("postgres", "user=root dbname=root")
if err != nil {
    log.Fatal(err)
}
defer db.Close()
This ensures the connection gets cleaned up when you leave this scope. Regarding your handlers: it would be simple to write them as closures in the same scope as the connection, so you can use it freely.
EDIT: To clarify, you said you don't think you can nest functions in main. You can, with a function literal; it just needs the handler signature:

get_handler := func(w http.ResponseWriter, r *http.Request) {
    // db is captured by the closure; ReadTheData stands in for your query logic.
    w.Write(db.ReadTheData())
}
http.HandleFunc("/get", get_handler)
It's common for most apps to start out with the DB handle at package (global) scope. sql.DB is defined to be safe for concurrent use, and can therefore be used simultaneously by all handlers that need it.
var db *sql.DB

func main() {
    var err error
    db, err = sql.Open("postgres", "user=root dbname=root")
    if err != nil {
        log.Fatal(err)
    }
    ...
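A handler can then use the package-level db directly. A minimal sketch (the query, table, and column names here are made up for illustration):

func get_handler(w http.ResponseWriter, r *http.Request) {
    // db is the package-level *sql.DB opened in main.
    var value string
    err := db.QueryRow("SELECT value FROM kv WHERE key = $1", r.URL.Query().Get("key")).Scan(&value)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    fmt.Fprintln(w, value)
}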
This is kind of an extension of my previous question, Reuse log client in interceptor for Golang grpc server method.
Basically I have a grpc server (written in Go) that exposes three APIs:
SubmitJob
CancelJob
GetJobStatus
I am using Datadog to log metrics, so in each API, I have code like:
func (s *myServer) submitJob(ctx context.Context, request *submitJobRequest) (*submitJobResponse, error) {
    s.dd_client.LogRequestCount("SubmitJob")
    start_time := time.Now()
    // Wrapped in a closure so time.Since is evaluated when the handler returns,
    // not when the defer statement runs.
    defer func() { s.dd_client.LogRequestDuration("SubmitJob", time.Since(start_time)) }()

    sth, err := someFunc1()
    if err != nil {
        s.dd_client.LogErrorCount("SubmitJob")
        return nil, err
    }

    resp, err := someFunc2(sth)
    if err != nil {
        s.dd_client.LogErrorCount("SubmitJob")
        return nil, err
    }
    return resp, nil
}
This approach works but has several problems:
The LogRequestCount and LogRequestDuration calls are duplicated across all APIs.
I am calling LogErrorCount in every place where an error is returned, which seems ugly.
I learned that an interceptor might help with logging, so I wrote one like this:
func (s *myServer) UnaryInterceptor(ctx context.Context,
    request interface{},
    info *grpc.UnaryServerInfo,
    handler grpc.UnaryHandler,
) (interface{}, error) {
    // Get the method name, e.g. SubmitJob, CancelJob, GetJobStatus
    tmp := strings.Split(info.FullMethod, "/")
    method := tmp[len(tmp)-1]

    s.dd_client.LogRequestCount(method)
    start_time := time.Now()
    resp, err := handler(ctx, request)
    s.dd_client.LogRequestDuration(method, time.Since(start_time))
    if err != nil {
        s.dd_client.LogErrorCount(method)
    }
    return resp, err
}
And set it in the main() function:
server := grpc.NewServer(grpc.UnaryInterceptor(my_server.UnaryInterceptor))
This works for me, but I noticed two problems:
Here the interceptor takes myServer as a receiver; is this good practice? I am doing this because I want to reuse the Datadog client (dd_client) created within myServer. Other options would be to create a Datadog client singleton used by both the interceptor and myServer, or to create an interceptor struct with its own, separate Datadog client (sketched below).
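For reference, a minimal sketch of that last option, where datadogClient stands in for my dd_client wrapper type (all names here are assumptions):

type metricsInterceptor struct {
    dd *datadogClient // stand-in for the dd_client wrapper type
}

func (m *metricsInterceptor) Unary(ctx context.Context, req interface{},
    info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
    tmp := strings.Split(info.FullMethod, "/")
    method := tmp[len(tmp)-1]

    m.dd.LogRequestCount(method)
    start := time.Now()
    resp, err := handler(ctx, req)
    m.dd.LogRequestDuration(method, time.Since(start))
    if err != nil {
        m.dd.LogErrorCount(method)
    }
    return resp, err
}

// in main():
// mi := &metricsInterceptor{dd: newDatadogClient()} // constructor name assumed
// server := grpc.NewServer(grpc.UnaryInterceptor(mi.Unary))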
The interceptor can only handle logging for generic metrics, e.g. request count and duration. But there could be metrics specific to each API, which means I would still need logging-related code in each API implementation. The question then is: should I still use an interceptor? Because now the logging-related code is split between two places (the API implementations and the interceptor).
I'd like to connect from Go to the running instance of the Memgraph database. I'm using Docker and I've installed the Memgraph Platform. What exactly do I need to do?
The procedure for connecting from Go to Memgraph is rather simple. You need to use the Bolt protocol. Here are the needed steps:
First, create a new directory for your app, /MyApp, and position yourself in it. Next, create a program.go file with the following code:
package main

import (
    "fmt"

    "github.com/neo4j/neo4j-go-driver/v4/neo4j"
)

func main() {
    dbUri := "bolt://localhost:7687"
    driver, err := neo4j.NewDriver(dbUri, neo4j.BasicAuth("username", "password", ""))
    if err != nil {
        panic(err)
    }
    // Handle the driver lifetime based on your application lifetime requirements.
    // The driver's lifetime is usually bound to the application lifetime, which
    // usually implies one driver instance per application.
    defer driver.Close()

    item, err := insertItem(driver)
    if err != nil {
        panic(err)
    }
    fmt.Printf("%v\n", item.Message)
}
func insertItem(driver neo4j.Driver) (*Item, error) {
    // Sessions are short-lived, cheap to create and NOT thread safe. Typically create one or more
    // sessions per request in your web application. Make sure to call Close on the session when done.
    // For multi-database support, set sessionConfig.DatabaseName to the requested database.
    // The session config defaults to write mode; if only reads are to be used, configure the
    // session for read mode.
    session := driver.NewSession(neo4j.SessionConfig{})
    defer session.Close()

    result, err := session.WriteTransaction(createItemFn)
    if err != nil {
        return nil, err
    }
    return result.(*Item), nil
}
func createItemFn(tx neo4j.Transaction) (interface{}, error) {
    records, err := tx.Run(
        "CREATE (a:Greeting) SET a.message = $message RETURN 'Node ' + id(a) + ': ' + a.message",
        map[string]interface{}{"message": "Hello, World!"})
    // In the face of driver-native errors, make sure to return them directly.
    // Depending on the error, the driver may try to execute the function again.
    if err != nil {
        return nil, err
    }
    record, err := records.Single()
    if err != nil {
        return nil, err
    }
    // You can also retrieve values by name, e.g. `id, found := record.Get("n.id")`
    return &Item{
        Message: record.Values[0].(string),
    }, nil
}

type Item struct {
    Message string
}
Now, create a go.mod file using the go mod init example.com/hello command.
I've mentioned the Bolt driver earlier. You need to add it with go get github.com/neo4j/neo4j-go-driver/v4@v4.3.1. You can run your program with go run ./program.go.
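After these two commands, your go.mod should look roughly like this (the go directive will match your toolchain version, and the exact driver version may differ):

module example.com/hello

go 1.16

require github.com/neo4j/neo4j-go-driver/v4 v4.3.1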
The complete documentation is located on the Memgraph site.
I've recently shifted from Python to Go. I had been using Python to work with GCP.
I used to pass in the scopes and specify which discovery client I wanted to create, like this:
def get_client(scopes, api, version="v1"):
    service_account_json = os.environ.get("SERVICE_ACCOUNT_KEY_JSON", None)
    if service_account_json is None:
        sys.exit("Exiting !!! No SSH_KEY_SERVICE_ACCOUNT env var found.")
    credentials = service_account.Credentials.from_service_account_info(
        json.loads(b64decode(service_account_json)), scopes=scopes
    )
    return discovery.build(api, version, credentials=credentials, cache_discovery=False)
And this would create my desired discovery client, whether it be the Compute Engine service or sqladmin.
However, in Go I don't seem to find an equivalent.
I found this: https://pkg.go.dev/google.golang.org/api/discovery/v1
For any client that I want to create, I would have to import its package and then construct it, like this:
https://cloud.google.com/resource-manager/reference/rest/v1/projects/list#examples
package main

import (
    "fmt"
    "log"

    "golang.org/x/net/context"
    "golang.org/x/oauth2/google"
    "google.golang.org/api/cloudresourcemanager/v1"
)

func main() {
    ctx := context.Background()

    c, err := google.DefaultClient(ctx, cloudresourcemanager.CloudPlatformScope)
    if err != nil {
        log.Fatal(err)
    }

    cloudresourcemanagerService, err := cloudresourcemanager.New(c)
    if err != nil {
        log.Fatal(err)
    }

    req := cloudresourcemanagerService.Projects.List()
    if err := req.Pages(ctx, func(page *cloudresourcemanager.ListProjectsResponse) error {
        for _, project := range page.Projects {
            // TODO: Change code below to process each `project` resource:
            fmt.Printf("%#v\n", project)
        }
        return nil
    }); err != nil {
        log.Fatal(err)
    }
}
So I have to import each client library to get a client for that API:
"google.golang.org/api/cloudresourcemanager/v1"
There's no dynamic creation of it.
Is it even possible, given that Go does strict type checking? 🤔
Thanks.
No, this is not possible with the Golang Google Cloud libraries.
You've nailed the point on strict type checking: dynamic client creation would defeat the benefits of compile-time type checking. It would also be bad Golang practice to return different objects with different signatures, as we don't do duck typing and instead rely on interface contracts.
Golang is boring and verbose, and it's like that by design :)
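For illustration, the strongly-typed equivalent is simply to construct each service you need explicitly; a minimal sketch using Application Default Credentials (the two APIs chosen here are just examples):

package main

import (
    "context"
    "log"

    cloudresourcemanager "google.golang.org/api/cloudresourcemanager/v1"
    compute "google.golang.org/api/compute/v1"
)

func main() {
    ctx := context.Background()

    // Each generated API package has its own typed constructor; credentials
    // are resolved from the environment (ADC) when no options are passed.
    crm, err := cloudresourcemanager.NewService(ctx)
    if err != nil {
        log.Fatal(err)
    }
    gce, err := compute.NewService(ctx)
    if err != nil {
        log.Fatal(err)
    }

    _ = crm // e.g. crm.Projects.List()
    _ = gce // e.g. gce.Instances.List(project, zone)
}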
I'm setting up a TCP server in a pet project I'm writing in Go. I want to be able to maintain a slice of all connected clients, and then modify it whenever a new client connects to or disconnects from my server.
My main mental obstacle right now is whether I should be declaring a package-level slice, or just passing a slice into my handler.
My first thought was to declare my ClientList slice (I'm aware that a slice might not be my best option here, but I've decided to leave it as is for now) as a package-level variable. While I think this would work, I've seen a number of posts discouraging the use of package-level variables.
My other thought was to declare ClientList as a slice in my main function and then pass it to my HandleClient function, so whenever a client connects/disconnects I can call AddClient or RemoveClient with this slice to add/remove the appropriate client.
This implementation is seen below. There are definitely other issues with the code, but I'm stuck trying to wrap my head around something that seems like it should be very simple.
type Client struct {
    Name string
    Conn net.Conn
}

type ClientList []*Client

// Identify is used to set the name of the client
func (cl *Client) Identify() error {
    // code here to set the client's name based on input from the client
}
// This is not a threadsafe way to do this - need to use mutex/channels
func (cList *ClientList) AddClient(cl *Client) {
    *cList = append(*cList, cl)
}

func (cl *Client) HandleClient(cList *ClientList) {
    defer cl.Conn.Close()
    cList.AddClient(cl)

    err := cl.Identify()
    if err != nil {
        log.Println(err)
        return
    }

    for {
        err := cl.Conn.SetDeadline(time.Now().Add(20 * time.Second))
        if err != nil {
            log.Println(err)
            return
        }

        cl.Conn.Write([]byte("What command would you like to perform?\n"))
        netData, err := bufio.NewReader(cl.Conn).ReadString('\n')
        if err != nil {
            log.Println(err)
            return
        }

        cmd := strings.TrimSpace(string(netData))
        if cmd == "Ping" {
            cl.Ping() // sends a pong msg back to client
        } else {
            cl.Conn.Write([]byte("Unsupported command at this time\n"))
        }
    }
}
func main() {
    arguments := os.Args
    PORT := ":" + arguments[1]

    l, err := net.Listen("tcp4", PORT)
    if err != nil {
        fmt.Println(err)
        return
    }
    defer l.Close()
    fmt.Println("Listening...")

    // Create a new slice to store pointers to clients
    var cList ClientList

    for {
        c, err := l.Accept()
        if err != nil {
            log.Println(err)
            return
        }

        // Create client cl1
        cl1 := Client{Conn: c}
        // Go and handle the client
        go cl1.HandleClient(&cList)
    }
}
From my initial testing, this appears to work. I am able to print out my client list and I can see that new clients are being added, and their names are set after Identify() is called as well.
When I run it with the -race flag, I do get data race warnings, so I know I will need a thread-safe way to handle adding clients. The same goes for removing clients when I add that in.
Are there any other issues I might be missing by passing my ClientList into HandleClient, or any benefits I would gain from declaring ClientList as a package-level variable instead?
There are several problems with this approach.
First, your code contains a data race: each TCP connection is served by a separate goroutine, and they all attempt to modify the slice concurrently.
You can see this by building and running your code with the -race flag (go run -race, go test -race, or a binary built with go build -race, whatever you're using) and watching the runtime race detector report the conflicting accesses.
This one is easy to fix. The most straightforward approach is to add a mutex field to the ClientList type:
type ClientList struct {
    mu      sync.Mutex
    clients []*Client
}
…and make the type's methods hold the mutex while they're mutating the clients field, like this:
func (cList *ClientList) AddClient(cl *Client) {
    cList.mu.Lock()
    defer cList.mu.Unlock()
    cList.clients = append(cList.clients, cl)
}
(If you find that the typical usage pattern of your ClientList type is to frequently call methods which only read the contained list, you may switch to sync.RWMutex instead, which allows multiple concurrent readers; see the sketch just below.)
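For example, a read-only accessor could then take the read lock; a sketch, assuming mu has been changed to sync.RWMutex:

func (cList *ClientList) Snapshot() []*Client {
    cList.mu.RLock()
    defer cList.mu.RUnlock()

    // Return a copy so callers cannot race with later mutations.
    out := make([]*Client, len(cList.clients))
    copy(out, cList.clients)
    return out
}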
Second, I'd split the part which "identifies" a client out of the handler function.
As of now, if the identification fails, the handler exits but the client is not delisted.
I'd say it would be better to identify the client up front and only run the handler once the client is believed to be okay.
It is also worth adding a deferred call to something like RemoveClient right after adding the client, so that the client is properly delisted when the handler is done with it.
IOW, I'd expect to see something like this:
func (cl *Client) HandleClient(cList *ClientList) {
    defer cl.Conn.Close()

    err := cl.Identify()
    if err != nil {
        log.Println(err)
        return
    }

    cList.AddClient(cl)
    defer cList.RemoveClient(cl)

    // ... the rest of the code
}
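The RemoveClient used above is not shown; under the same mutex it might look roughly like this (matching clients by pointer identity):

func (cList *ClientList) RemoveClient(cl *Client) {
    cList.mu.Lock()
    defer cList.mu.Unlock()

    for i, c := range cList.clients {
        if c == cl {
            // Remove the matched element by re-slicing around it.
            cList.clients = append(cList.clients[:i], cList.clients[i+1:]...)
            return
        }
    }
}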
I would like to ask how we should approach the issue with context propagation in Golang.
My application is an HTTP JSON API server.
I would use the context as a container of informative data (e.g. the request id, some things I unpack from requests or from the process).
One of the simplest advantages is being able to carry data and tags useful for statistics and logging, e.g. adding the transaction id to each log line in all the packages I own.
The problem I'm facing is the following:
func handleActivityY(w http.ResponseWriter, r *http.Request) {
    info, err := decodeRequest(r)
    ...
    stepOne, err := stepOne(r.Context(), info)
    ...
    stepTwo, err := stepTwo(r.Context(), stepOne)
    ...
}
The problem with this design is that the context is an immutable entity: each time we add something or set a new timeout, we get a new context.
The context cannot be propagated except by returning it from each function call (together with the return value, if any, and the error).
The only way to make this work would be:
func handleActivityY(w http.ResponseWriter, r *http.Request) {
    ctx, info, err := decodeRequest(r)
    ...
    ctx, stepOne, err := stepOne(ctx, info)
    ...
    ctx, stepTwo, err := stepTwo(ctx, stepOne)
    ...
}
I've already polluted almost every function in my packages with a context.Context parameter. Returning it in addition to the other return values seems overkill to me.
Is there really no other more elegant way to do so?
I am currently using the Gin framework, which has its own mutable context, but I don't want to add a dependency on Gin just for that.
Early in your context pipeline, add a mutable pointer to a data struct:
type MyData struct {
    // whatever you need
}

var MyDataKey = /* something */

// Note: context.WithValue returns only the derived context (no cancel func).
ctx := context.WithValue(context.Background(), MyDataKey, &MyData{})
Then in your methods that need to modify your data structure, just do so:
// The value comes back as interface{}, so assert it to the concrete type.
data := ctx.Value(MyDataKey).(*MyData)
data.Foo = /* something */
All normal rules about concurrent access safety apply, so you may need to use mutexes or other protection mechanisms if multiple goroutines can read/set your data value simultaneously.
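Put together, a self-contained sketch of this pattern might look like the following (the key type, MyData fields, and helper names are all illustrative):

package main

import (
    "context"
    "fmt"
    "sync"
)

type MyData struct {
    mu  sync.Mutex
    Foo string
}

// An unexported key type avoids collisions with context keys from other packages.
type myDataKey struct{}

func withMyData(ctx context.Context) context.Context {
    return context.WithValue(ctx, myDataKey{}, &MyData{})
}

func myDataFrom(ctx context.Context) *MyData {
    d, _ := ctx.Value(myDataKey{}).(*MyData)
    return d
}

func stepOne(ctx context.Context) {
    if d := myDataFrom(ctx); d != nil {
        d.mu.Lock()
        d.Foo = "set in stepOne"
        d.mu.Unlock()
    }
}

func main() {
    ctx := withMyData(context.Background())
    stepOne(ctx)
    fmt.Println(myDataFrom(ctx).Foo) // prints "set in stepOne"
}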
Is there really no other more elegant way to do so?
stepOne could return its own data, independent of the context and isolated from how the caller may use that information (i.e. put it in a data bag/context and pass it to other functions):
func handleActivityY(w http.ResponseWriter, r *http.Request) {
    ctx, info, err := decodeRequest(r)
    ...
    stepOne, err := stepOne(ctx, info)
    ...
    ctx = context.WithValue(ctx, "someContextStepTwoNeeds", stepOne.Something())
    stepTwo, err := stepTwo(ctx, stepOne)
    ...
}
IMO, request-scoped data passed via the context should be limited to extremely minimal contextual information.