Datastore: Create parent and child entity in an entity group transaction? - go

After reading about Google Datastore concepts/theory, I started using the Go datastore package.
Scenario:
Kinds User and LinkedAccount require that every user has one or more linked accounts (yay 3rd party login). For strong consistency, LinkedAccounts will be children of the associated User. New User creation then involves creating both a User and a LinkedAccount, never just one.
User creation seems like the perfect use case for transactions: if, say, LinkedAccount creation fails, the transaction rolls back and fails. This doesn't currently seem possible. The goal is to create a parent and then a child within a single transaction.
According to the docs:
All Datastore operations in a transaction must operate on entities in
the same entity group if the transaction is a single group transaction
We want a new User and LinkedAccount to be in the same group, so to me it sounds like Datastore should support this scenario. My fear is that the intended meaning is that operations on existing entities in the same group can be performed in a single transaction.
tx, err := datastore.NewTransaction(ctx)
if err != nil {
    return err
}
incompleteUserKey := datastore.NewIncompleteKey(ctx, "User", nil)
pendingKey, err := tx.Put(incompleteUserKey, user)
if err != nil {
    return err
}
incompleteLinkedAccountKey := datastore.NewIncompleteKey(ctx, "GithubAccount", incompleteUserKey)
// also tried PendingKey as parent, but it's a separate struct type
_, err = tx.Put(incompleteLinkedAccountKey, linkedAccount)
if err != nil {
    return err
}
// attempt to commit
if _, err := tx.Commit(); err != nil {
    return err
}
return nil
From the library source it's clear why this doesn't work: a PendingKey isn't a Key, and incomplete keys can't be used as parents.
Is this a necessary limitation of Datastore or of the library? For those experienced with this type of requirement, did you just sacrifice the strong consistency and make both kinds global?
For Google-ability:
datastore: invalid key
datastore: cannot use pendingKey as type *"google.golang.org/cloud/datastore".Key

One thing to note is that transactions in the Cloud Datastore API can operate on up to 25 entity groups, but this doesn't answer the question of how to create two entities in the same entity group as part of a single transaction.
There are a few ways to approach this (note that this applies to any use of the Cloud Datastore API, not just the gcloud-golang library):
Use a (string) name for the parent key instead of having Datastore automatically assign a numeric ID:
parentKey := datastore.NewKey(ctx, "Parent", "parent-name", 0, nil)
childKey := datastore.NewIncompleteKey(ctx, "Child", parentKey)
Make an explicit call to AllocateIDs to have Datastore pick a numeric ID for the parent key:
incompleteKeys := []*datastore.Key{datastore.NewIncompleteKey(ctx, "Parent", nil)}
completeKeys, err := datastore.AllocateIDs(ctx, incompleteKeys)
if err != nil {
    // ...
}
parentKey := completeKeys[0]
childKey := datastore.NewIncompleteKey(ctx, "Child", parentKey)
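
Putting the second option together with the original transaction code: pre-allocate the parent's ID, then Put both entities inside one transaction. This is a minimal sketch in the same package-level call style as the snippets above, where user and linkedAccount are the values from the question:

keys, err := datastore.AllocateIDs(ctx, []*datastore.Key{
    datastore.NewIncompleteKey(ctx, "User", nil),
})
if err != nil {
    return err
}
userKey := keys[0] // now a complete key, usable as a parent

tx, err := datastore.NewTransaction(ctx)
if err != nil {
    return err
}
if _, err := tx.Put(userKey, user); err != nil {
    return err
}
// Only the parent must be complete; the child key itself may stay incomplete.
childKey := datastore.NewIncompleteKey(ctx, "GithubAccount", userKey)
if _, err := tx.Put(childKey, linkedAccount); err != nil {
    return err
}
if _, err := tx.Commit(); err != nil {
    return err
}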

Related

Instantiating an object in a file VS in a struct

I was reading this blog recently and I saw something interesting. The object instance is initialized in the file itself and then accessed everywhere. I found it pretty convenient and was wondering if it's the best practice.
https://dev.to/hackmamba/build-a-rest-api-with-golang-and-mongodb-gin-gonic-version-269m#:~:text=setup.go%20file%20and%20add%20the-,snippet%20below,-%3A
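For reference, a minimal sketch of the pattern the blog describes: a package-level instance created at import time and then accessed directly from any package. The package name, connection string, and function names here are illustrative, not taken from the blog.

package config

import (
    "context"
    "log"
    "time"

    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"
)

// DB is initialized when the package is first imported and can be
// referenced directly from any other package as config.DB.
var DB *mongo.Client = connectDB()

func connectDB() *mongo.Client {
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()
    client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://localhost:27017"))
    if err != nil {
        log.Fatal(err)
    }
    return client
}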
I'm more used to a pattern where we first create a struct like so:
type Server struct {
    config     util.Config
    store      db.Store
    tokenMaker token.Maker
    router     *gin.Engine
}
and then set everything in main:
func NewServer(config util.Config, store db.Store) (*Server, error) {
    tokenMaker, err := token.NewPasetoMaker(config.TokenSymmetricKey)
    if err != nil {
        return nil, fmt.Errorf("cannot create token maker: %w", err)
    }

    server := &Server{
        config:     config,
        store:      store,
        tokenMaker: tokenMaker,
    }

    server.setupRouter()
    return server, nil
}
and then the server object is passed everywhere.
What's best? Is it okay to use the pattern mentioned in that blog?
Thank you.
I tried to implement both patterns. The pattern mentioned in the blog seems very convenient to use, as I'm not passing objects around and can easily access the object I'm interested in.
You can follow either of those patterns. But I think it's better to pass the object pointer everywhere it's needed: it saves lots of work and ensures that everyone sees the same, up-to-date object. A sketch of the injected pattern in practice follows.
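To make the comparison concrete, here is a minimal sketch of how the injected pattern continues: handlers are methods on *Server, so the dependencies travel with the receiver rather than living in package-level state. setupRouter matches the constructor above; createUser and Start are assumed helpers, not code from the question.

// Handlers are methods on *Server, so config, store, and tokenMaker are
// reachable through the receiver without any package-level globals.
func (server *Server) setupRouter() {
    router := gin.Default()
    router.POST("/users", server.createUser) // createUser reads server.store
    server.router = router
}

// Start runs the HTTP server on the given address.
func (server *Server) Start(address string) error {
    return server.router.Run(address)
}

In tests, this also makes it trivial to construct a Server around a fake db.Store.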

How to do integration test for a service that depends on another service in a microservice environment?

I am building a microservice app and am currently writing some tests. The function I am testing is below; it's owned by the cart service and gets all cart items, then fills in each item's details from the catalog service.
func (s *Server) Grpc_GetCartItems(ctx context.Context, in *pb.GetCartItemsRequest) (*pb.ItemsResponse, error) {
    // Get product IDs and their quantities in the cart by userId
    res, err := s.Repo.GetCartItems(ctx, in.UserId)
    if err != nil {
        return nil, err
    }
    // Return an empty response if there are no items in the cart
    if len(res) == 0 {
        return &pb.ItemsResponse{}, nil
    }
    // Get product ID keys from the map
    ids := GetMapKeys(res)
    // RPC call to the catalog server to get the cart products' names
    products, err := s.CatalogClient.Grpc_GetProductsByIds(ctx, &catalogpb.GetProductsByIdsRequest{ProductIds: ids})
    if err != nil {
        return nil, err
    }
    // Return the response in the format: product id, product name, qty in cart
    items, err := AppendItemToResponse(products, res)
    if err != nil {
        return nil, err
    }
    return items, nil
}
The problem is that for the test setup I need to seed test data into both the cart and catalog repositories. I can do that with the cart repo just fine, but for the catalog, is it common practice to just mock the dependency s.CatalogClient.Grpc_GetProductsByIds instead? I am still new to testing, and from what I understand you generally don't mock in integration tests, but I am not sure if there's a better way to tackle this kind of issue.
You're correct in that for an integration test you would not mock a service.
Usually, if it is a service you do not have control over, you would stub the service (see the sketch below).
Integration tests can be run against staging or testing services (in an E2E capacity) or in a virtual environment (like compose, K8S, etc.).
I think for your requirement, I would stage it using docker-compose or something similar. If you intend to go for an E2E setup in the future, you may want to look into having a testing environment.
See: https://www.testenvironmentmanagement.com/types-of-testing-environments/
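For example, the catalog dependency can be stubbed with an in-process gRPC server over bufconn, so the cart test still exercises a real gRPC round trip against canned data. This is only a sketch: the CatalogServer/CatalogClient service names, the ProductsResponse type, and the catalogpb import path are assumptions, since the question only shows the method names.

import (
    "context"
    "net"
    "testing"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
    "google.golang.org/grpc/test/bufconn"

    catalogpb "example.com/catalog/pb" // assumed import path for the generated package
)

// stubCatalogServer serves canned catalog data for tests.
type stubCatalogServer struct {
    catalogpb.UnimplementedCatalogServer
}

func (s *stubCatalogServer) Grpc_GetProductsByIds(ctx context.Context, in *catalogpb.GetProductsByIdsRequest) (*catalogpb.ProductsResponse, error) {
    // Return seeded products for in.ProductIds; fill in whatever the test needs.
    return &catalogpb.ProductsResponse{}, nil
}

// newStubCatalogClient starts an in-process gRPC server and returns a client
// wired to it; both are torn down when the test finishes.
func newStubCatalogClient(t *testing.T) catalogpb.CatalogClient {
    lis := bufconn.Listen(1 << 20)
    srv := grpc.NewServer()
    catalogpb.RegisterCatalogServer(srv, &stubCatalogServer{})
    go srv.Serve(lis)
    t.Cleanup(srv.Stop)

    conn, err := grpc.Dial("bufnet",
        grpc.WithContextDialer(func(ctx context.Context, _ string) (net.Conn, error) {
            return lis.DialContext(ctx)
        }),
        grpc.WithTransportCredentials(insecure.NewCredentials()),
    )
    if err != nil {
        t.Fatal(err)
    }
    t.Cleanup(func() { conn.Close() })
    return catalogpb.NewCatalogClient(conn)
}

The test then constructs the server under test as s := &Server{Repo: seededCartRepo, CatalogClient: newStubCatalogClient(t)} and calls Grpc_GetCartItems directly.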
Obligatory "you should not be calling one microservice directly from another" comment. While you can find a way to make testing work, you've tightly coupled the architecture. This testing concern is only the first of many to come, since your cart service is directly tied to your catalog service. If you fix your tightly-coupled architecture problem, your testing problem will also be resolved.

Go http client setup for multiple endpoints?

I reuse the http client connection to make external calls to a single endpoint. An excerpt of the program is shown below:
var AppCon MyApp

func New(user, pass string, platformURL *url.URL, restContext string) (*MyApp, error) {
    if AppCon == (MyApp{}) {
        AppCon = MyApp{
            user:        user,
            password:    pass,
            URL:         platformURL,
            Client:      &http.Client{Timeout: 30 * time.Second},
            RESTContext: restContext,
        }
        cj, err := cookiejar.New(nil)
        if err != nil {
            return &AppCon, err
        }
        AppCon.cookie = cj
    }
    return &AppCon, nil
}

// This is an example only. There are many more functions which accept *MyApp as a pointer.
func (ma *MyApp) GetUser(name string) (string, error) {
    // Return user
}

func main() {
    for {
        // Get messages from a queue.
        // The message returned from the queue provides info on which methods to call.
        // 'm' is a struct with message metadata.
        c, err := New(m.un, m.pass, m.url, m.restContext)
        go func() {
            // Do something, i.e. c.GetUser("123456")
        }()
    }
}
I now have the requirement to set up client connections to different endpoints/credentials received via queue messages.
The problem I foresee is that I can't simply modify AppCon with the new endpoint details, since a pointer to MyApp is returned, which would reset c. This could cause a goroutine to make an HTTP call to an unintended endpoint. To make matters non-trivial, the program is not meant to have awareness of the endpoints (I was considering a switch statement) but rather receives what it needs via queue messages.
Assuming the issues I've called out are correct, are there any recommendations on how to solve this?
EDIT 1
Based on the feedback provided, I am inclined to believe this will solve my problem:
Remove the use of a Singleton of MyApp
Decouple the http client from MyApp, enabling it to be reused
var httpClient *http.Client

func New(user, pass string, platformURL *url.URL, restContext string) (*MyApp, error) {
    AppCon = MyApp{
        user:     user,
        password: pass,
        URL:      platformURL,
        Client: func() *http.Client {
            if httpClient == nil {
                httpClient = &http.Client{Timeout: 30 * time.Second}
            }
            return httpClient
        }(),
        RESTContext: restContext,
    }
    return &AppCon, nil
}

// This is an example only. There are many more functions which accept *MyApp as a pointer.
func (ma *MyApp) GetUser(name string) (string, error) {
    // Return user
}

func main() {
    for {
        // Get messages from a queue.
        // The message returned from the queue provides info on which methods to call.
        // 'm' is a struct with message metadata.
        c, err := New(m.un, m.pass, m.url, m.restContext)
        // Must pass the pointer into the goroutine
        go func(c *MyApp) {
            // Do something, i.e. c.GetUser("123456")
        }(c)
    }
}
Disclaimer: this is not a direct answer to your question but rather an attempt to direct you to a proper way of solving your problem.
Try to avoid the singleton pattern for your MyApp. In addition, New is misleading: it doesn't actually create a new object every time. Instead, you could create a new instance on every call while still preserving the http client connection.
Don't use constructions like AppCon == (MyApp{}); one day you will shoot yourself in the foot doing this. Use a pointer instead and compare it to nil.
Avoid race conditions. In your code you start a goroutine and immediately proceed to the next iteration of the for loop. Since you re-use the whole MyApp instance, you essentially introduce a race condition.
Using cookies, you make your connection somewhat stateful, but your task seems to require stateless connections. There might be something wrong with such an approach. A sketch of the decoupled setup follows below.
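Concretely, a minimal sketch of that advice: create one shared http.Client in main (http.Client is safe for concurrent use) and build a fresh MyApp per queue message, injecting the client explicitly. The message fields are illustrative, as in the question.

func New(client *http.Client, user, pass string, platformURL *url.URL, restContext string) (*MyApp, error) {
    cj, err := cookiejar.New(nil)
    if err != nil {
        return nil, err
    }
    return &MyApp{
        user:        user,
        password:    pass,
        URL:         platformURL,
        Client:      client, // shared across instances
        RESTContext: restContext,
        cookie:      cj, // per-instance state stays per-instance
    }, nil
}

func main() {
    client := &http.Client{Timeout: 30 * time.Second} // created once, reused
    for {
        // 'm' comes from the queue as before.
        c, err := New(client, m.un, m.pass, m.url, m.restContext)
        if err != nil {
            continue
        }
        go func(c *MyApp) {
            // c.GetUser("123456")
        }(c)
    }
}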

Should database connections be opened and closed in every CRUD method?

I am using GORM ORM for Postgres access in a Go application.
I have got 4 functions, Create, Update, Delete, and Read in a database repository.
In each of these functions, I open a database connection, perform the CRUD operation, and then close the connection right after the operation is performed, as you will see here and here and in the code snippet below, using GORM:
func (e *Example) Create(m *model.Example) (*model.Example, error) {
    // open a database session
    dbSession, err := e.OpenDB() // gorm.Open("postgres", connStr)
    if err != nil {
        log.Log(err)
        return nil, err
    }
    // close database connection after operation is completed
    defer dbSession.Close()
    // create item
    db := dbSession.Create(m)
    if db.Error != nil {
        return nil, db.Error
    }
    return m, nil
}
Is that the correct practice, or should one database connection be shared across the whole application, letting the ORM manage connections, as stated here?
You should reuse a DB connection as much as you can. gorm also has a built-in connection pool, so you don't need to manage the db handle yourself: simply share it amongst all goroutines, and they can use it safely while the pool allocates new connections as needed. A minimal sketch follows.
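A minimal sketch of that advice, using the same GORM v1 API as the question; the DB field on Example and connStr are assumptions for illustration:

func main() {
    db, err := gorm.Open("postgres", connStr) // open once at startup
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    repo := &Example{DB: db} // inject the shared handle
    // ... use repo for the lifetime of the process
}

func (e *Example) Create(m *model.Example) (*model.Example, error) {
    // Reuse the shared handle; GORM's pool manages connections underneath.
    if err := e.DB.Create(m).Error; err != nil {
        return nil, err
    }
    return m, nil
}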

Revel - Storing an object in session

I'm using the oauth package "code.google.com/p/goauth2/oauth" with Revel, and it creates a few structures with quite a bit of information in them. I need this information to persist throughout the session, but session values can only be strings. Is there a better way of doing this than the following?
c.Session["AccessToken"] = t.Token.AccessToken
c.Session["RefreshToken"] = t.Token.RefreshToken
...
If not, how do I reassemble the strings into another structure so I can call Client.Get()?
You can use the json package to "convert" structs to strings and vice versa. Just be aware that only exported fields are serialized this way.
Since oauth.Token has only exported fields, this will work:
if data, err := json.Marshal(t.Token); err == nil {
    c.Session["Token"] = string(data)
} else {
    panic(err)
}
And this is how you can reconstruct the token from the session:
if err := json.Unmarshal([]byte(c.Session["Token"]), &t.Token); err != nil {
    panic(err)
}
Alternatively, you can save just a string ID in the Session and store the object itself in the Cache:
c.Session["id"] = id
go cache.Set("token_"+id, t.Token, 30*time.Minute)
And then use it as follows:
var token oauth.Token
id := c.Session["id"]
if err := cache.Get("token_"+id, &token); err != nil {
    // TODO: token not found, do something
}
// TODO: use your token here...
The advantage of this approach is that you do not have to work with the json package explicitly, and the cache has a few different back-ends out of the box: memory, Redis, Memcached. Moreover, you do not have the 4K size limitation of a cookie-based Session.
https://revel.github.io/manual/cache.html
