I have a web application written in Go with multiple modules: one deals with all database-related things, one deals with reports, one contains all the web services, one holds just business logic and data integrity validation, and there are several others. So, numerous methods and functions are spread across these modules.
Now, the requirement is to use a session in the web services, and we also need to use a transaction in some APIs. The first approach that came to my mind is to change the signatures of the existing methods to accept the session and transaction (*sql.Tx) (which is a painful task, but it has to be done anyway!). But I'm afraid: what if something comes up in the future that needs to be passed through all these methods? Would I have to go through this cycle of changing the method signatures all over again? This does not seem like a good approach.
Later, I found that context.Context might be a good approach (well, you can suggest other approaches too, apart from this!): for every method call, just pass a context parameter as the first argument, so I have to change the method signatures only once. If I go with this approach, can anyone tell me how I would set/pass multiple keys (session, sql.Tx) in that context object?
(AFAIK, the context package provides a WithValue function, but can I use it for multiple keys? How would I set a key in a nested function call; is that even possible?)
Actually, this question has two parts:
Should I consider context.Context for my solution? If not, please shed some light on another approach.
How do I set multiple keys and values in context.Context?
For your second question, you can group all your keys/values in a struct as follows:

type vars struct {
    lock *sync.Mutex
    db   *sql.DB
}

Then you can add this struct to a context. Use an unexported key type rather than a plain string (string keys from different packages can collide, and go vet warns about them), and store a pointer to the struct so the mutex inside it is never copied:

type varsKey struct{}

ctx := context.WithValue(context.Background(), varsKey{}, &vars{lock: mylock, db: mydb})

And you can retrieve it:

ctxVars, ok := r.Context().Value(varsKey{}).(*vars)
if !ok {
    err := errors.New("vars not found in request context")
    log.Println(err)
    return err
}
db := ctxVars.db
lock := ctxVars.lock
I hope it helps you.
Finally, I decided to go with the context package solution, after studying the articles from the Go context experience reports. I found Dave Cheney's article especially helpful.
Well, I could write my own custom context solution, like gorilla did (ah, somewhat!). But since Go already has a solution for this, I'll go with the context package.
Right now, I only need the session and the database transaction in each method: the transaction so a method can join one if it has been begun, and the session for user authentication and authorization.
Having context.Context in every method of the application might be overhead, since I don't need the cancellation, deadline, or timeout functionality at the moment, but it could be helpful in the future.
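To sketch what that looks like for the two values I need, here is a minimal example using one unexported key type per value; the appctx package name, the Session type, and the helper names are all illustrative, not a fixed API:

package appctx

import (
    "context"
    "database/sql"
)

// Session is a placeholder for whatever session type the app uses.
type Session struct{ UserID string }

// Unexported key types prevent collisions with other packages.
type sessionKey struct{}
type txKey struct{}

// WithSession returns a copy of ctx carrying the session.
func WithSession(ctx context.Context, s *Session) context.Context {
    return context.WithValue(ctx, sessionKey{}, s)
}

// SessionFrom extracts the session, reporting whether one was set.
func SessionFrom(ctx context.Context) (*Session, bool) {
    s, ok := ctx.Value(sessionKey{}).(*Session)
    return s, ok
}

// WithTx returns a copy of ctx carrying the transaction.
func WithTx(ctx context.Context, tx *sql.Tx) context.Context {
    return context.WithValue(ctx, txKey{}, tx)
}

// TxFrom extracts the transaction, reporting whether one was set.
func TxFrom(ctx context.Context) (*sql.Tx, bool) {
    tx, ok := ctx.Value(txKey{}).(*sql.Tx)
    return tx, ok
}

With this, "multiple keys" is just multiple chained WithValue calls, e.g. ctx = appctx.WithTx(appctx.WithSession(ctx, s), tx).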
I'm implementing a Cadence Workflow that needs to call functions with context.Context parameters. How do I go about getting a context.Context from the workflow.Context? Is it just a matter of ctx.(*context.Context)?
It is not a context.Context.
You should never write any workflow code that uses context.Context at all. All calls that need a context.Context should be made inside an activity or a local activity, for determinism.
In other words, workflow code should only contain logic to orchestrate/manage other workflow entities like activities/child workflows/signals/etc.
workflow.Context is a special data structure for the worker to pass in workflow run-time information during workflow execution, for example the workflowID and runID. It happens to be called Context just because that looks similar to the usual Go style; other than that, it has nothing directly to do with context.Context.
In the Java client there is no workflow.Context; the way the worker passes this data through is via a ThreadLocal.
If you really want to pass some KV data from outside into workflow code, you can use context propagation: https://github.com/uber-common/cadence-samples/tree/master/cmd/samples/recipes/ctxpropagation
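As a minimal sketch, assuming the go.uber.org/cadence client (the CallService and SampleWorkflow names are invented): the activity receives the ordinary context.Context, so context-dependent calls belong there, while the workflow function only orchestrates.

package sample

import (
    "context"
    "time"

    "go.uber.org/cadence/workflow"
)

// CallService is an activity: activities receive a standard
// context.Context, so functions that need one are called here.
func CallService(ctx context.Context, input string) (string, error) {
    // ... call the context.Context-based functions here ...
    return "result for " + input, nil
}

// SampleWorkflow never touches context.Context; it only orchestrates.
func SampleWorkflow(ctx workflow.Context, input string) (string, error) {
    ao := workflow.ActivityOptions{
        ScheduleToStartTimeout: time.Minute,
        StartToCloseTimeout:    time.Minute,
    }
    ctx = workflow.WithActivityOptions(ctx, ao)

    var result string
    err := workflow.ExecuteActivity(ctx, CallService, input).Get(ctx, &result)
    return result, err
}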
Up until now I have been handling authorization in the CommandHandlers.
For example, I have a "Team" aggregate containing a list of managers (the AggregateIdentifier of a User). All command handlers in the Team aggregate then verify that the user executing the command is a manager of the team.
The userId is injected as metadata in a CommandHandlerInterceptor based on the SecurityContext.
My main concern is that when I use sagas, it becomes additional overhead to maintain the user context across the commands issued against different aggregates. Aside from that, the manager association can expire while the saga is running, causing subsequent commands to fail and leading to an incomplete state, which then needs to be handled with some rollback functionality.
Is it better to do the authorization in my controller layer to avoid the additional overhead or should I see it more as good practice to let my CommandHandlers decide whether the command is valid for the aggregate?
Authorization to perform certain operations/commands is something which I'd argue isn't domain-specific logic. Instead, it is more a cross-cutting concern which you need throughout your application. Thus, the @CommandHandler annotated method is not the ideal place for it, in my head. However, placing it close by makes a lot of sense.
You have pointed out that you are already using a CommandHandlerInterceptor to populate the Spring SecurityContext, thus I am assuming you are using a CommandDispatchInterceptor to populate the command's MetaData with this information when you send a command out. This is a great use of the interceptor logic indeed, so I'd keep that in place. This, however, sets the information; it doesn't validate it.
To that end, you could build your own Handler Enhancer, which validates security metadata on a command. You could even build a dedicated annotation you'd add next to the @CommandHandler annotation, which describes the required roles. That way, the method still portrays what roles you need for the given command, but the actual validation can be done in the Handler Enhancer for you.
Now, let's circle back to your question:
Is it better to do the authorization in my controller layer to avoid the additional overhead or should I see it more as good practice to let my CommandHandlers decide whether the command is valid for the aggregate?
I think it's fine to do it in the aggregate, potentially making it cleaner through use of a Handler Enhancer. When it comes to your concern about the Saga, well, I think you should see that as separate. The Saga handles events: facts that something has happened. Ignoring such a fact because whoever initiated the operations that led to it doesn't have the rights doesn't change the point that it still happened. Added to that, you indeed have no guarantees about the timing of the Saga at all. Maybe your Saga deals with historical events, meaning the check is completely out of scope.
If possible within your system, I would regard any command the Saga wants to publish as being sent by a "system user". The Saga is not something your users (which have specific roles) will directly influence; it is all indirect. The Saga is internal to your system, hence it is the system describing the intent to perform an operation.
That's my two cents on the situation; hope this helps you out, @Vincent!
So I've currently come upon a very real problem in writing "correct" Go. I have an object (for the sake of simplicity, let's think of it as a map[string]string) and I want it to hold shared state between multiple goroutines.
Currently the implementation goes something like this:
// Inside shared_state.go

var (
    sharedMap = make(map[string]string)
    mutex     sync.RWMutex
)

func Add(k string, v string) bool {
    mutex.Lock()
    defer mutex.Unlock()
    if _, exists := sharedMap[k]; exists {
        return false
    }
    sharedMap[k] = v
    return true
}

// Other methods to access, modify... etc.
Whilst this does do the job, it is quite an ugly implementation by Go standards, which encourage modeling concurrency using messages.
Are there easy ways of modeling shared state using messages that I am blatantly unaware of? Or am I forced to use mutexes in these kinds of cases?
You don't "model shared state using messages", you use messages instead of shared state, which requires designing the application based on different fundamentals. It is generally not a matter of rewriting a mutex as a channel, but a completely different implementation approach, and that approach won't be applicable to all scenarios where you need to synchronize operations. If a shared map is the best approach for your situation, then a mutex is the correct way to synchronize access to it.
As an example from my own experience, I've developed applications that allow for changing their configuration at runtime. Rather than having a shared Config object and synchronizing access to it, I give each main goroutine a channel on which it can receive configuration updates. When the config changes, the update is sent to all the listeners. When a listener gets a config change, it can complete its current operation, then deal with the config change in whatever way is appropriate to that routine - it may just update its local copy of the config, it may close connections to external resources and open new ones, etc. Instead of sharing data, I'm sending and receiving events, which is a fundamentally different design.
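A rough sketch of that fan-out, with an invented Config struct and worker loop (how each routine applies an update would differ in practice):

package main

import (
    "fmt"
    "time"
)

// Config is an invented stand-in for the application configuration.
type Config struct {
    Endpoint string
}

// worker keeps a private copy of the config; changes arrive as messages.
func worker(id int, updates <-chan Config) {
    cfg := Config{Endpoint: "initial"}
    for {
        select {
        case cfg = <-updates:
            // Finish the current operation, then react: update the local
            // copy, reopen connections, etc., as fits this routine.
            fmt.Printf("worker %d now using %s\n", id, cfg.Endpoint)
        default:
            _ = cfg // ... do one unit of work with the current config ...
            time.Sleep(100 * time.Millisecond)
        }
    }
}

func main() {
    // One update channel per listener; the publisher fans the event out.
    listeners := make([]chan Config, 3)
    for i := range listeners {
        listeners[i] = make(chan Config, 1)
        go worker(i, listeners[i])
    }

    // A config change is an event sent to every listener.
    for _, ch := range listeners {
        ch <- Config{Endpoint: "https://example.com/v2"}
    }
    time.Sleep(time.Second)
}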
Maybe it's a stupid question, but I've confused myself.
I use Dapper with Autofac (DI) and in my business layer I always use the following construction:
db.Open(); //db is IDbConnection instance
db.InsertOrDeleteOrUpdate(...)
db.Close();
Is it a good solution to hide db.Open() and db.Close() behind a delegate method? For example:
dbHelper.do(db => db.InsertOrDeleteOrUpdate(...));
What do you do in such cases?
Redundant code is not the only issue here. Your code opens and closes the connection for every action. This will hurt performance if your DB server is deployed on a remote machine. Refer to this answer.
Have you ever come across the UnitOfWork pattern? It is a very good solution for handling connections at some higher level.
A Connection Factory is another good alternative, and it can be used with a UnitOfWork. But I personally prefer just the UnitOfWork.
Refer to this answer for sample code of a UnitOfWork.
Once you choose from the above, it is easy to reduce the redundancy by using the "connection per request" pattern. You need to identify a centralized location in your code where to create the DalSession and where to dispose of it.
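For a rough feel of the unit-of-work idea, here is a sketch expressed in Go; the UnitOfWork type and its methods are illustrative only, standing in for the DalSession of the linked answer:

package dal

import "database/sql"

// UnitOfWork scopes one transaction to one logical request.
type UnitOfWork struct {
    tx *sql.Tx
}

// Begin starts the unit of work at a centralized location,
// e.g. the start of handling a request.
func Begin(db *sql.DB) (*UnitOfWork, error) {
    tx, err := db.Begin()
    if err != nil {
        return nil, err
    }
    return &UnitOfWork{tx: tx}, nil
}

// Tx exposes the transaction so every repository in the request shares it.
func (u *UnitOfWork) Tx() *sql.Tx { return u.tx }

// Commit ends the unit of work successfully; Rollback abandons it.
func (u *UnitOfWork) Commit() error   { return u.tx.Commit() }
func (u *UnitOfWork) Rollback() error { return u.tx.Rollback() }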
In Go, if we have a type with a method that starts some looped mechanism (polling A and doing B forever), is it best to express this as:
// Run does stuff, you probably want to run this as a goroutine
func (t Type) Run() {
    // Do long-running stuff
}
and document that this probably wants to be launched as a goroutine (and let the caller deal with that)
Or to hide this from the caller:
// Run does stuff concurrently
func (t Type) Run() {
    go DoRunStuff()
}
I'm new to Go and unsure whether convention says to let the caller prefix the call with 'go', or to do it for them when the code is designed to run asynchronously.
My current view is that we should document and give the caller a choice. My thinking is that in Go the concurrency isn't actually part of the exposed interface, but a property of using it. Is this right?
I shared your opinion on this until I started writing an adapter for a web service that I wanted to make concurrent. I have a goroutine that must be started to parse results that are returned on a channel from the web calls. There is absolutely no case in which this API would work without that goroutine running.
I then began to look at packages like net/http. There is mandatory concurrency within that package. It is documented at the interface level that it can be used concurrently; however, the default implementations automatically use goroutines.
Because Go's standard library commonly fires off goroutines within its own packages, I think that if your package or API warrants it, you can start them on your own.
My current view is that we should document and give the caller a choice.
I tend to agree with you.
Since Go makes it so easy to run code concurrently, you should try to avoid concurrency in your API (a concurrent API forces clients to consume it concurrently). Instead, create a synchronous API, and then clients have the option to run it synchronously or concurrently.
This was discussed in a talk a couple years ago: Twelve Go Best Practices
Slide 26, in particular, shows code more like your first example.
I would view the net/http package as an exception because in this case, the concurrency is almost mandatory. If the package didn't use concurrency internally, the client code would almost certainly have to. For example, http.Client doesn't (to my knowledge) start any goroutines. It is only the server that does so.
In most cases, it's going to be one line of code for the caller either way:
go Run() or StartGoroutine()
The synchronous API is no harder to use concurrently and gives the caller more options.
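As a hypothetical illustration of giving the caller that choice (the Poller type and the done channel are invented for this sketch):

package main

import (
    "fmt"
    "time"
)

// Poller polls A and does B forever; Run blocks by design.
type Poller struct {
    interval time.Duration
}

// Run loops until done is closed. It is synchronous: the caller decides
// whether to wrap it in a goroutine.
func (p Poller) Run(done <-chan struct{}) {
    ticker := time.NewTicker(p.interval)
    defer ticker.Stop()
    for {
        select {
        case <-done:
            return
        case <-ticker.C:
            fmt.Println("poll")
        }
    }
}

func main() {
    p := Poller{interval: 200 * time.Millisecond}
    done := make(chan struct{})

    go p.Run(done) // the caller opts in to concurrency...
    time.Sleep(time.Second)
    close(done)
    // ...or calls p.Run(done) directly to run it synchronously.
}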
There is no 'right' answer because circumstances differ.
Obviously there are cases where an API might contain utilities, simple algorithms, data collections etc that would look odd if packaged up as goroutines.
Conversely, there are cases where it is natural to expect 'under-the-hood' concurrency, such as a rich IO library (http server being the obvious example).
For a more extreme case, suppose you were to produce a library of plug-n-play concurrent services. Such an API consists of modules, each having a well-described interface via channels. Clearly, in this case it would inevitably involve goroutines starting as part of the API.
One clue might well be the presence or absence of channels in the function parameters. But I would expect clear documentation of what to expect either way.
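As an invented example of such a channel-interfaced module, where starting a goroutine inside the API is the natural design:

package plug

// NewDoubler returns the two channel ends that form the module's whole
// interface; the goroutine behind them starts as part of the API call.
func NewDoubler() (chan<- int, <-chan int) {
    in := make(chan int)
    out := make(chan int)
    go func() {
        defer close(out)
        for v := range in {
            out <- v * 2
        }
    }()
    return in, out
}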