How to design a REST API for too many tables in Golang

I think if I keep using the method below, I'll have to write too much code.
I declared structs for all the tables, and I used the Go validator package for validation.
[types.go]
type TableA struct {
    Field1 string `json:"field1" validate:"required,max=10"`
    Field2 int    `json:"field2" validate:"number"`
}
type TableB struct {
...
}
And I initialized the router for each method and connected the handlers.
[tableA.go]
router.Get("/table-a", r.Get_tableA_Handler),
router.Post("/table-a", r.Post_tableA_Handler),
router.Patch("/table-a", r.Patch_tableA_Handler),
router.Delete("/table-a", r.Delete_tableA_Handler)
...
Each handler parses the JSON in the request body, validates the data, and calls the DB function.
[tableA_router.go]
func (rt *tableARouter) Post_tableA_Handler(w http.ResponseWriter, r *http.Request) error {
    // JSON to struct
    req := new(types.TableA)
    if err := httputils.DecodeJsonBody(r, req); err != nil {
        return err
    }
    // Validation
    if err := validCheck(req); err != nil {
        return err
    }
    // DB function
    err := rt.insert_tableA_DB(r.Context(), req)
    if err != nil {
        return err
    }
    return rt.rd.JSON(w, http.StatusCreated, "Create Success")
}
...
func validCheck(data interface{}) error {
    validate := validator.New()
    return validate.Struct(data)
}
This is a DB function called from the handler above (using GORM):
[tableA_db.go]
func (rt *tableARouter) insert_tableA_DB(ctx context.Context, data *types.TableA) error {
    // DB connect
    db, err := db.Open(rt.dbcfg)
    if err != nil {
        return err
    }
    defer db.Close()
    tx := db.Begin()
    defer tx.Rollback()
    // == INSERT ==
    query := `INSERT INTO table_a
        (field1, field2, ...)
        VALUES (?, ?, ...)`
    result := tx.WithContext(ctx).Exec(query,
        data.Field1, data.Field2, ...)
    // Result
    if result.Error != nil {
        ...
    }
There are too many tables now... If there are 100 tables, I have to write 100 handlers and 100 DB functions.
Is there any way to use something like /tables/{tableName}?
Please give me any advice... Thank you.

You can use an ORM package like GORM to make your work easier.
Or you can build a universal handler and, with the reflect package, analyze your defined structs and build every SQL query dynamically. But that's not the best solution if any of your structs has inner slices or embedded structs, or if you need joined tables; you'll have to deal with those cases manually. I have servers with more than 200 endpoints, 300-400 methods, and 200+ SQL tables, where the whole server was written by hand. But I can say it's very rare for a handler and its DB func to be reusable without modification.
Maybe you can wrap the error handling, rollback/commit, JSON parsing, and response parts in a func, then use it to call the DB methods.
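As an illustration of that last idea, here is a minimal sketch (assuming Go 1.18+ generics and the go-playground/validator/v10 package; PostHandler and its wiring are illustrative names, not an existing API) of a handler factory that folds the decode/validate/insert boilerplate into one place:
import (
    "context"
    "encoding/json"
    "net/http"

    "github.com/go-playground/validator/v10"
)

var validate = validator.New()

// PostHandler builds a POST handler for any request type T:
// decode the JSON body, validate the struct tags, then call insert.
func PostHandler[T any](insert func(context.Context, *T) error) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        req := new(T)
        if err := json.NewDecoder(r.Body).Decode(req); err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        if err := validate.Struct(req); err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        if err := insert(r.Context(), req); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        w.WriteHeader(http.StatusCreated)
    }
}

// Usage: one line per table instead of a handler per table, e.g.
// router.Post("/table-a", PostHandler(rt.insert_tableA_DB))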

Related

How to write table data into a nested struct in Go

Summary
I am trying to write data from several Postgres tables into a nested Go struct in order to return a single JSON response to a GET request within my web app.
Questions
Is the way I'm declaring a nested struct reasonable from a Go best practices perspective, or is there a reason I should avoid this method and do it another way?
What am I doing wrong in Step 3 to prevent my code from working? (I fear the answer is 'everything')
What I've got so far
I've declared my struct of structs
type MainObject struct {
    SubObjects []struct {
        SpecificDetail string `json:"specific-detail"`
    } `json:"sub-object"`
    ...(other []structs)...
}
I've retrieved rows from the tables
func getMainObjectHandler(w http.ResponseWriter, r *http.Request) {
    ...(database connection)...
    MainObjectID := r.URL.Query().Get("moid")
    if MainObjectID != "null" {
        NewMainObject := MainObject{}
        SubObjectDetail_rows, err := db.Query("SELECT specific_detail from the_table WHERE moid= '" + MainObjectID + "'")
        if err != nil {
            log.Fatalf("could not execute query: %v", err)
        }
        ...(other db.Query rows)...
I've tried (and failed) to build the row data into the struct.
for SubObjectDetail_rows.Next() {
    SpecificDetail := NewMainObject.SubObject.SpecificDetail{}
    SubObjectDetail_rows.Scan(&SpecificDetail)
    SubObject = append(SubObject, SpecificDetail)
}
NewMainObject = append(MainObject, SubObject)
defer persona_rows.Close()
Finally, I've set up Marshal and write.
NMOListBytes, err := json.Marshal(NewMainObject)
if err != nil {
    fmt.Println(fmt.Errorf("Error: %v", err))
    w.WriteHeader(http.StatusInternalServerError)
    return
}
w.Write(NMOListBytes)
Firstly, please use placeholders when creating your SQL query to avoid injections:
// db.Query("SELECT specific_detail from the_table WHERE moid= '" + MainObjectID + "'") // not this
db.Query("SELECT specific_detail from the_table WHERE moid=$1", MainObjectID) // $1 is the Postgres placeholder syntax
Unless you're using a framework like GORM, you can't Scan() into a single struct value.
From the docs:
Scan copies the columns in the current row into the values pointed at
by dest. The number of values in dest must be the same as the number
of columns in Rows.
It looks like you're pulling JSON from a DB query, as you're only querying one column, so you probably want:
var bs []byte // get raw JSON bytes
err = SubObjectDetail_rows.Scan(&bs)
if err != nil { /* always check errors */ }
and then unmarshal them to your struct:
err = json.Unmarshal(bs, &SpecificDetail)
if err != nil { /* ... */ }
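Putting those two pieces together, a minimal sketch of step 3 might look like this (assuming the anonymous inner struct is promoted to a named SubObject type, and that each row's single column holds one JSON object):
type SubObject struct {
    SpecificDetail string `json:"specific-detail"`
}

// (remember to defer SubObjectDetail_rows.Close() right after the query)
for SubObjectDetail_rows.Next() {
    var bs []byte
    if err := SubObjectDetail_rows.Scan(&bs); err != nil {
        log.Fatalf("could not scan row: %v", err)
    }
    var sd SubObject
    if err := json.Unmarshal(bs, &sd); err != nil {
        log.Fatalf("could not unmarshal row: %v", err)
    }
    NewMainObject.SubObjects = append(NewMainObject.SubObjects, sd)
}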

How to serialize `LastEvaluatedKey` from DynamoDB's Golang SDK?

When working with DynamoDB in Golang, if a call to query has more results, it will set LastEvaluatedKey on the QueryOutput, which you can then pass in to your next call to query as ExclusiveStartKey to pick up where you left off.
This works great when the values stay in Golang. However, I am writing a paginated API endpoint, so I would like to serialize this key so I can hand it back to the client as a pagination token. Something like this, where something is the magic package that does what I want:
type GetDomainObjectsResponse struct {
    Items     []MyDomainObject `json:"items"`
    NextToken string           `json:"next_token"`
}

func GetDomainObjects(w http.ResponseWriter, req *http.Request) {
    // ... parse query params, set up dynamoIn ...
    dynamoIn.ExclusiveStartKey = something.Decode(params.NextToken)
    dynamoOut, _ := db.Query(dynamoIn)
    response := GetDomainObjectsResponse{}
    dynamodbattribute.UnmarshalListOfMaps(dynamoOut.Items, &response.Items)
    response.NextToken = something.Encode(dynamoOut.LastEvaluatedKey)
    // ... marshal and write the response ...
}
(please forgive any typos in the above, it's a toy version of the code I whipped up quickly to isolate the issue)
Because I'll need to support several endpoints with different search patterns, I would love a way to generate pagination tokens that doesn't depend on the specific search key.
The trouble is, I haven't found a clean and generic way to serialize the LastEvaluatedKey. You can marshal it directly to JSON (and then e.g. base64 encode it to get a token), but doing so is not reversible: LastEvaluatedKey is a map[string]types.AttributeValue, and types.AttributeValue is an interface, so while the json package can marshal it, it can't unmarshal it back into the interface type.
For example, the following code panics with panic: json: cannot unmarshal object into Go value of type types.AttributeValue.
lastEvaluatedKey := map[string]types.AttributeValue{
    "year":  &types.AttributeValueMemberN{Value: "1993"},
    "title": &types.AttributeValueMemberS{Value: "Benny & Joon"},
}
bytes, err := json.Marshal(lastEvaluatedKey)
if err != nil {
    panic(err)
}
decoded := map[string]types.AttributeValue{}
err = json.Unmarshal(bytes, &decoded)
if err != nil {
    panic(err)
}
What I would love would be a way to use the DynamoDB-flavored JSON directly, like what you get when you run aws dynamodb query on the CLI. Unfortunately the golang SDK doesn't support this.
I suppose I could write my own serializer / deserializer for the AttributeValue types, but that's more effort than this project deserves.
Has anyone found a generic way to do this?
OK, I figured something out.
type GetDomainObjectsResponse struct {
    Items     []MyDomainObject `json:"items"`
    NextToken string           `json:"next_token"`
}

func GetDomainObjects(w http.ResponseWriter, req *http.Request) {
    // ... parse query params, set up dynamoIn ...
    eskMap := map[string]string{}
    json.Unmarshal([]byte(params.NextToken), &eskMap)
    esk, _ := dynamodbattribute.MarshalMap(eskMap)
    dynamoIn.ExclusiveStartKey = esk
    dynamoOut, _ := db.Query(dynamoIn)
    response := GetDomainObjectsResponse{}
    dynamodbattribute.UnmarshalListOfMaps(dynamoOut.Items, &response.Items)
    lek := map[string]string{}
    dynamodbattribute.UnmarshalMap(dynamoOut.LastEvaluatedKey, &lek)
    lekBytes, _ := json.Marshal(lek)
    response.NextToken = string(lekBytes)
    // ... marshal and write the response ...
}
(again this is my real solution hastily transferred back to the toy problem, so please forgive any typos)
As @buraksurdar pointed out, attributevalue.Unmarshal takes an interface{}. It turns out that in addition to a concrete type, you can pass in a map[string]string, and it just works.
I believe this will NOT work if the AttributeValue is not flat, so this isn't a general solution [citation needed]. But my understanding is that the LastEvaluatedKey returned from a call to Query will always be flat, so it works for this use case.
Inspired by Dan, here is a solution to serialize and deserialize to/from base64
package dynamodb_helpers

import (
    "encoding/base64"
    "encoding/json"

    "github.com/aws/aws-sdk-go-v2/feature/dynamodb/attributevalue"
    "github.com/aws/aws-sdk-go-v2/service/dynamodb/types"
)

func Serialize(input map[string]types.AttributeValue) (*string, error) {
    var inputMap map[string]interface{}
    err := attributevalue.UnmarshalMap(input, &inputMap)
    if err != nil {
        return nil, err
    }
    bytesJSON, err := json.Marshal(inputMap)
    if err != nil {
        return nil, err
    }
    output := base64.StdEncoding.EncodeToString(bytesJSON)
    return &output, nil
}

func Deserialize(input string) (map[string]types.AttributeValue, error) {
    bytesJSON, err := base64.StdEncoding.DecodeString(input)
    if err != nil {
        return nil, err
    }
    outputJSON := map[string]interface{}{}
    err = json.Unmarshal(bytesJSON, &outputJSON)
    if err != nil {
        return nil, err
    }
    return attributevalue.MarshalMap(outputJSON)
}
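A hypothetical call site in the paginated handler from earlier (dynamoIn, dynamoOut, params, and response are the toy names above) would then look like:
// Decode the incoming pagination token, if the client sent one.
if params.NextToken != "" {
    esk, err := dynamodb_helpers.Deserialize(params.NextToken)
    if err != nil {
        // respond with 400 Bad Request
    }
    dynamoIn.ExclusiveStartKey = esk
}
// ... run the query ...
// Encode the outgoing token for the client.
if dynamoOut.LastEvaluatedKey != nil {
    token, err := dynamodb_helpers.Serialize(dynamoOut.LastEvaluatedKey)
    if err != nil {
        // respond with 500 Internal Server Error
    }
    response.NextToken = *token
}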

Golang Standard Package Structure

I am fairly new to Go and I am trying to create a structured application using guidance from Ben Johnson's webpage. Unfortunately, his example is not a complete working application.
His webpage is https://medium.com/@benbjohnson/standard-package-layout-7cdbc8391fc1
I have tried to use his methods and I keep getting an "Undefined: db" error. It doesn't tell me which line is causing the error, just the file "mssql.go".
Could someone help with guidance to help me fix this error?
Edited code with the accepted solution:
StatementPrinter.go
package statementprinter
type Statement struct {
    CustomerId   string
    CustomerName string
}

type StatementService interface {
    Statement(id string) (*Statement, error)
}
main.go
package main
import (
    "database/sql"
    "fmt"
    "log"

    "github.com/ybenjolin/StatementPrinter"
    "github.com/ybenjolin/StatementPrinter/mssql"

    _ "github.com/alexbrainman/odbc"
)

const DB_INFO = "Driver={SQL Server};Server=cdc-edb2;Database=CostarReports;Trusted_Connection=yes;"

var db *sql.DB

func init() {
    var err error
    db, err = sql.Open("odbc", DB_INFO)
    if err != nil {
        log.Fatal("Error opening database connection.\n", err.Error())
    }
    err = db.Ping()
    if err != nil {
        log.Fatal("Error pinging database server.\n", err.Error())
    }
    fmt.Println("Database connection established.")
}
func main() {
    var err error
    defer db.Close()
    // Create services.
    // Changes required here. Was: ss := &statementprinter.Stat..
    ss := &mssql.StatementService{DB: db}
    // Use service.
    var s *statementprinter.Statement
    s, err = ss.Statement("101583")
    if err != nil {
        log.Fatal("Query failed:", err.Error())
    }
    fmt.Printf("Statement: %+v\n", s)
}
mssql.go
package mssql
import (
    "database/sql"

    "github.com/ybenjolin/StatementPrinter"

    _ "github.com/alexbrainman/odbc"
)

// StatementService represents a MSSQL implementation of statementprinter.StatementService.
type StatementService struct {
    DB *sql.DB
}

// Statement returns a statement for a given customer.
func (s *StatementService) Statement(customer string) (*statementprinter.Statement, error) {
    var t statementprinter.Statement
    // Changes required here. Was: row := db.QueryRow......
    row := s.DB.QueryRow(`Select Customer, CustomerName From AccountsReceivable.rptfARStatementHeader(?)`, customer)
    if err := row.Scan(&t.CustomerId, &t.CustomerName); err != nil {
        return nil, err
    }
    return &t, nil
}
This seems like it's just a typo. The problematic line is in the method
func (s *StatementService) Statement(customer string)
in mssql.go:
row := db.QueryRow(`Select Customer, CustomerName From AccountsReceivable.rptfARStatementHeader(?)`, customer)
QueryRow is supposed to be a method of db, but db is not defined. However, in the struct
type StatementService struct {
    DB *sql.DB
}
there's a *sql.DB instance. The method you're writing has a *StatementService receiver, s. So my guess is the intention was to access the sql.DB field through s, like so:
func (s *StatementService) Statement(customer string) (*statementprinter.Statement, error) {
    var t statementprinter.Statement
    // CHANGED LINE:
    row := s.DB.QueryRow(`Select Customer, CustomerName From AccountsReceivable.rptfARStatementHeader(?)`, customer)
    if err := row.Scan(&t.CustomerId, &t.CustomerName); err != nil {
        return nil, err
    }
    return &t, nil
}
Then the method is called in main.go and is passed a StatementService instance that contains a database handle:
ss := &statementprinter.StatementService{DB: db}
I believe you need to change this line to
ss := &mssql.StatementService{DB: db}
because that's the actual interface implementation. The line you have now treats the StatementService interface like a struct, which will not compile.
The global db in main.go lives for the lifetime of the application. It's just a pointer which is copied around for use.

Would I be better by using a prepared statement here?

I still struggle to understand the benefits of prepared statements in Go / psql.
Let's assume I have a struct
type Brand struct {
    Id        int        `json:"id,omitempty"`
    Name      string     `json:"name,omitempty"`
    Issued_at *time.Time `json:"issued_at,omitempty"`
}
And some table brands, where id is a unique field. Now I want to retrieve an element from that table using an id.
I can write the following function using QueryRow.
func GetBrand1(id int) (Brand, error) {
    brand := Brand{}
    if err := Db.QueryRow("SELECT name, issued_at FROM brands WHERE id = $1", id).Scan(&brand.Name, &brand.Issued_at); err != nil {
        if err == sql.ErrNoRows {
            return brand, nil
        }
        return brand, err
    }
    brand.Id = id
    return brand, nil
}
And I can do the same (I hope it is the same) using a prepared statement:
func GetBrand2(id int) (Brand, error) {
    brand := Brand{}
    stmt, err := Db.Prepare("SELECT name, issued_at FROM brands WHERE id = $1")
    if err != nil {
        return brand, err
    }
    defer stmt.Close()
    rows, err := stmt.Query(id)
    if err != nil {
        return brand, err
    }
    defer rows.Close()
    for rows.Next() {
        rows.Scan(&brand.Name, &brand.Issued_at)
        brand.Id = id
        return brand, err
    }
    if err = rows.Err(); err != nil {
        return brand, err
    }
    return brand, err
}
Now, in my application, I am planning to execute the GetBrand* functions many times (with different parameters). Is one of these implementations preferable to the other (in terms of SQL requests, memory, anything)? Or maybe they both suck and I'd be better off doing something else.
I have read this and a follow-up link, and I saw that:
db.Query() actually prepares, executes, and closes a prepared
statement. That’s three round-trips to the database. If you’re not
careful, you can triple the number of database interactions your
application makes
but I think the prepared statement in the second case will be closed at the end of the function anyway.
In both of those examples, there's roughly the same database overhead. If you're going to use a statement a lot, prepare it once in a wider scope so it's reusable.
You would only be making one round trip to the database with that pattern.
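A minimal sketch of that wider-scope pattern, reusing the Brand type and Db handle from the question (InitStatements and GetBrand3 are illustrative names):
var brandStmt *sql.Stmt

// InitStatements prepares the statement once, e.g. at startup;
// each later call then costs a single round trip.
func InitStatements() (err error) {
    brandStmt, err = Db.Prepare("SELECT name, issued_at FROM brands WHERE id = $1")
    return err
}

func GetBrand3(id int) (Brand, error) {
    brand := Brand{}
    err := brandStmt.QueryRow(id).Scan(&brand.Name, &brand.Issued_at)
    if err == sql.ErrNoRows {
        return brand, nil
    }
    if err != nil {
        return brand, err
    }
    brand.Id = id
    return brand, nil
}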
If you're ever using databases in conjunction with user input, you should always use prepared statements (or at least parameterized queries).
If not, you run the risk of SQL injection.

Golang repository pattern

I'm trying to implement the repository pattern in a Go app (a simple web service) and trying to find a better way to avoid code duplication.
Here is the code:
Interfaces are:
type IRoleRepository interface {
    GetAll() ([]Role, error)
}

type ISaleChannelRepository interface {
    GetAll() ([]SaleChannel, error)
}
And implementation:
func (r *RoleRepository) GetAll() ([]Role, error) {
    var result []Role
    var err error
    var rows *sql.Rows
    if err != nil {
        return result, err
    }
    connection := r.provider.GetConnection()
    defer connection.Close()
    rows, err = connection.Query("SELECT Id,Name FROM Position")
    defer rows.Close()
    if err != nil {
        return result, err
    }
    for rows.Next() {
        entity := new(Role)
        err = sqlstruct.Scan(entity, rows)
        if err != nil {
            return result, err
        }
        result = append(result, *entity)
    }
    err = rows.Err()
    if err != nil {
        return result, err
    }
    return result, err
}
func (r *SaleChannelRepository) GetAll() ([]SaleChannel, error) {
    var result []SaleChannel
    var err error
    var rows *sql.Rows
    if err != nil {
        return result, err
    }
    connection := r.provider.GetConnection()
    defer connection.Close()
    rows, err = connection.Query("SELECT DISTINCT SaleChannel 'Name' FROM Employee")
    defer rows.Close()
    if err != nil {
        return result, err
    }
    for rows.Next() {
        entity := new(SaleChannel)
        err = sqlstruct.Scan(entity, rows)
        if err != nil {
            return result, err
        }
        result = append(result, *entity)
    }
    err = rows.Err()
    if err != nil {
        return result, err
    }
    return result, err
}
As you can see, the differences come down to a few words. I tried to find something like generics from C#, but didn't find anything.
Can anyone help me?
No, Go does not have generics and won't have them in the foreseeable future¹.
You have three options:
1. Refactor your code so that you have a single function which accepts an SQL statement and another function, and which: queries the DB with the provided statement; iterates over the result's rows; and, for each row, calls the provided function whose task is to scan the row. In this case you'll have a single generic "querying" function, and the differences will be confined to the "scanning" functions. Several variations on this are possible, but I suspect you get the idea (a sketch follows this list).
2. Use the sqlx package, which basically is to SQL-driven databases what encoding/json is to JSON data streams: it uses reflection on your types to create and execute SQL to populate them. This way you get reusability on another level: you simply won't write boilerplate code.
3. Use code generation, which is the Go-native way of having "code templates" (that's what generics are about). This way, you (usually) write a Go program which takes some input (in whatever format you wish), reads it, and writes out one or more files of Go code, which is then compiled. In your, very simple, case you can start with a template of your Go function and some sort of table which maps SQL statements to the types created from the selected data.
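Here is the promised minimal sketch of option 1 (queryAll and the callback signature are illustrative; Role is assumed to have Id and Name fields matching the SELECT):
// queryAll runs one query and hands every row to a caller-supplied scan func.
func queryAll(db *sql.DB, query string, scan func(*sql.Rows) error) error {
    rows, err := db.Query(query)
    if err != nil {
        return err
    }
    defer rows.Close()
    for rows.Next() {
        if err := scan(rows); err != nil {
            return err
        }
    }
    return rows.Err()
}

// Usage: only the scanning differs per type.
var roles []Role
err := queryAll(db, "SELECT Id,Name FROM Position", func(rows *sql.Rows) error {
    var role Role
    if err := rows.Scan(&role.Id, &role.Name); err != nil {
        return err
    }
    roles = append(roles, role)
    return nil
})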
I'd note that your code indeed looks woefully unidiomatic.
No one in their right mind implements "repository patterns" in Go, but that's sort of okay as long as it keeps you happy; we are all indoctrinated to a certain degree by the languages/environments we're accustomed to. But your connection := r.provider.GetConnection() looks alarming: Go's database/sql is drastically different from "popular" environments and frameworks, so I'd highly recommend starting with this and this.
¹ (Update as of 2021-05-31) Go will have generics as the proposal to implement them has been accepted and the work implementing them is in progress.
Forgive me if I'm misunderstanding, but a better pattern might be something like the following:
type RepositoryItem interface {
    Name() string // for example
}

type Repository interface {
    GetAll() ([]RepositoryItem, error)
}
At the moment you essentially have a separate interface for each type of repository, so unless you're going to implement multiple kinds of RoleRepository, you might as well not have the interface at all.
Having generic Repository and RepositoryItem interfaces might make your code more extensible (not to mention easier to test) in the long run.
A contrived example might be (if we assume a Repository vaguely correlates to a backend) implementations such as MySQLRepository and MongoDBRepository. By abstracting the functionality of the repository, you're protecting against future mutations.
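For instance, a minimal sketch of one such backend (roleItem, the query, and the Position table are hypothetical):
// roleItem is a hypothetical concrete type satisfying RepositoryItem.
type roleItem struct {
    name string
}

func (r roleItem) Name() string { return r.name }

// MySQLRepository satisfies Repository for a MySQL backend.
type MySQLRepository struct {
    db *sql.DB
}

func (r *MySQLRepository) GetAll() ([]RepositoryItem, error) {
    rows, err := r.db.Query("SELECT Name FROM Position")
    if err != nil {
        return nil, err
    }
    defer rows.Close()
    var items []RepositoryItem
    for rows.Next() {
        var item roleItem
        if err := rows.Scan(&item.name); err != nil {
            return nil, err
        }
        items = append(items, item)
    }
    return items, rows.Err()
}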
I would very much advise seeing @kostix's answer also, though.
interface{} is the "generic type" in Go. I can imagine doing something like this:
package main

import "fmt"

type IRole struct {
    RoleId uint
}

type ISaleChannel struct {
    Profitable bool
}

type GenericRepo interface {
    GetAll([]interface{})
}

// conceptual repo to store all Roles and SaleChannels
type Repo struct {
    IRoles        []IRole
    ISaleChannels []ISaleChannel
}

func (r *Repo) GetAll(ifs []interface{}) {
    // database implementation here before type switch
    for _, v := range ifs {
        switch v := v.(type) {
        default:
            fmt.Printf("unexpected type %T\n", v)
        case IRole:
            fmt.Printf("Role %v\n", v)
            r.IRoles = append(r.IRoles, v)
        case ISaleChannel:
            fmt.Printf("SaleChannel %v\n", v)
            r.ISaleChannels = append(r.ISaleChannels, v)
        }
    }
}

func main() {
    getter := new(Repo)
    // mock slice
    data := []interface{}{
        IRole{1},
        IRole{2},
        IRole{3},
        ISaleChannel{true},
        ISaleChannel{false},
        IRole{4},
    }
    getter.GetAll(data)
    fmt.Println("IRoles: ", getter.IRoles)
    fmt.Println("ISaleChannels: ", getter.ISaleChannels)
}
This way you don't end up with two separate structs and/or interfaces for IRole and ISaleChannel.
