I have the following component which is used by multiple threads since it is being invoked by a listener (Kafka consumer).
@Component
public class SampleClass {

    private RuleFactory ruleFactory;

    public SampleClass(RuleFactory ruleFactory) {
        this.ruleFactory = ruleFactory;
    }

    void sampleFunction(final SampleObject sampleObject) {
        ruleFactory
            .getRules().stream()
            .filter(ruleFilter -> ruleFilter.getPredicate().test(sampleObject))
            .findFirst()
            .ifPresent(caseWinner -> caseWinner.applyChanges().accept(sampleObject));
    }
}
The method doesn't change the state of the class, but it shares another component, the RuleFactory, which doesn't have any mutable attributes.
Is this method thread-safe? One answer I got was that it is not, since you apply changes to an object that is passed as a parameter. Is that valid?
I can't think of any case other than two threads passing the same object and processing it in parallel.
Should this method be synchronized? Is the final keyword useless in terms of thread safety?
The method is thread-safe if all of the following are true:
All of the ruleFactory methods called are thread-safe; that is, none of them change the internal state of ruleFactory.
ruleFilter and caseWinner are thread-safe
If another thread has a reference to sampleObject, then the state of sampleObject must not be modified.
You already said RuleFactory is thread safe.
If you modify sampleObject in this function and another thread also has a reference to sampleObject, then there is a race condition. Synchronizing this function will only work if all the other threads modify sampleObject through the same SampleClass instance. You can synchronize on sampleObject itself, but then you have to make sure all other threads that access sampleObject also use synchronized blocks:
synchronized (sampleObject) {
    // read or write sampleObject
}
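For instance, here is a minimal self-contained sketch of that pattern; the SampleObject fields, its increment/current methods, and the two threads are made up purely for illustration. Both threads lock on the shared instance, so the read cannot interleave with the write.

// Illustration only: both the "writer" (the rule-applying code) and any other
// thread reading the same instance must synchronize on the same monitor.
class SampleObject {
    private int counter;
    void increment() { counter++; }
    int current()    { return counter; }
}

class Demo {
    static void applyChanges(SampleObject sampleObject) {
        synchronized (sampleObject) {          // writer locks the shared object
            sampleObject.increment();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SampleObject shared = new SampleObject();

        Thread writer = new Thread(() -> applyChanges(shared));
        Thread reader = new Thread(() -> {
            synchronized (shared) {            // reader must use the same lock
                System.out.println(shared.current());
            }
        });

        writer.start();
        reader.start();
        writer.join();
        reader.join();
    }
}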
I recently ran into this error. I have never come across it before, so I am wondering what it means.
Cannot access a disposed context instance. A common cause of this error is disposing a context instance that was resolved from dependency injection and then later trying to use the same context instance elsewhere in your application. This may occur if you are calling 'Dispose' on the context instance, or wrapping it in a using statement. If you are using dependency injection, you should let the dependency injection container take care of disposing context instances.
Object name: 'OrderDbContext'.
The only thing I missed, which produced this error, was the await keyword in the controller action method before _mediator.Send(checkoutCommand);. Once I added the await keyword, the error vanished.
What (the heck) is wrong with missing this await keyword? The error does not explicitly state that. Can someone explain why missing an await keyword causes an error saying the database context is disposed?
Controller Action:
public async Task<IActionResult> Checkout(CheckoutOrderCommand checkoutCommand)
{
    var id = _mediator.Send(checkoutCommand); // <-- the missing await was here
    return CreatedAtRoute("Get Order by Id", id);
}
CQRS MediatR command handler:
public class CheckoutOrderCommandHandler : IRequestHandler<CheckoutOrderCommand, int>
{
    private readonly IOrderUow _orderUow;
    private readonly IMapper _mapper;

    public CheckoutOrderCommandHandler(IOrderUow orderUow,
        IMapper mapper)
    {
        _orderUow = orderUow;
        _mapper = mapper;
    }

    public async Task<int> Handle(CheckoutOrderCommand request, CancellationToken cancellationToken)
    {
        var order = _mapper.Map<Order>(request);

        // Populate base entity fields
        order.CreatedBy = Constants.CurrentUser;

        // create new order
        var newOrder = await _orderUow.AddOrderAsync(order);
        return newOrder.Id;
    }
}
Unit of work implementation
public class OrderUow : IOrderUow
{
    private readonly OrderDbContext _orderContext;

    public OrderUow(OrderDbContext orderContext)
    {
        _orderContext = orderContext;
    }

    public async Task<Order> AddOrderAsync(Order order)
    {
        try
        {
            await _orderContext.Orders.AddAsync(order);
            await _orderContext.SaveChangesAsync();
        }
        catch (Exception ex)
        {
        }
        return order;
    }
}
Missing an await without explicitly handling the task that is returned will mean that the code calling the asynchronous method will not create a resumption point and instead will continue executing to completion, in your case leading to the disposal of the DbContext.
Asynchronous code is multi-threaded behind the scenes. Think of it this way: your web request enters on Thread #1, which creates a DbContext instance via an IoC container, calls an asynchronous method, then returns. When the code calls an async method, that work can be handed off to a worker thread to execute. By adding await you tell .NET to create a resume point to come back to. That may be the original calling thread or a new worker thread, but the important thing is that the calling code resumes only after the async method completes. Without await there is no resume point, so the calling code continues past the async method call. This can lead to all kinds of bad behaviour. The calling code can end up completing and disposing the DbContext (what you are seeing), or, if it calls another operation against the DbContext, you can end up with an exception complaining about access across multiple threads, since a DbContext detects that and does not allow access across threads.
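For reference, the fix on the caller side is just to await the send, so the action only completes (and the request scope is only disposed) after the handler has finished; this is a sketch of the corrected action from the question:

public async Task<IActionResult> Checkout(CheckoutOrderCommand checkoutCommand)
{
    // Awaiting creates the resumption point: the request (and its DbContext)
    // is not torn down until the handler has completed.
    var id = await _mediator.Send(checkoutCommand);
    return CreatedAtRoute("Get Order by Id", id);
}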
You can observe the threading behaviour by inspecting Thread.CurrentThread.ManagedThreadId before the async method call, inside the async method, then after the async method call.
Without the await you would see Thread #X before the method call, then Thread #Y inside the async method, then back to Thread #X after. While debugging it would most likely appear to work since the worker thread would likely finish by the time you were done with the breakpoints, but at runtime that worker thread (Y) would have been started, but the code after the call on thread X would continue running, ultimately disposing of your DbContext likely before Thread Y was finished executing. With the await call, you would likely see Thread #X before the method call, Thread #Y inside, then Thread #Z after the call. The breakpoint after the async call would only be triggered after the async method completes. Thread #X would be released while waiting for the async method, but the DbContext etc. wouldn't be disposed until the resumption point created by awaiting the async method had run. It is possible that the code can resume on the original thread (X) if that thread is available in the pool.
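A hedged sketch of that experiment (the method and class names are made up): log Thread.CurrentThread.ManagedThreadId at each of the three points and compare a run with the await against a run without it.

using System;
using System.Threading;
using System.Threading.Tasks;

class ThreadIdDemo
{
    static async Task Main()
    {
        Console.WriteLine($"Before call: {Thread.CurrentThread.ManagedThreadId}");

        await DoWorkAsync();          // remove the await to watch the caller race ahead

        Console.WriteLine($"After call:  {Thread.CurrentThread.ManagedThreadId}");
    }

    static async Task DoWorkAsync()
    {
        await Task.Delay(100);        // simulate I/O such as SaveChangesAsync
        Console.WriteLine($"Inside:      {Thread.CurrentThread.ManagedThreadId}");
    }
}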
This is a very good reason to follow the general naming convention for asynchronous methods and use the "Async" suffix in the method name. If your Mediator.Send method is async, consider naming it "SendAsync" to make missing await statements far more obvious. Also check those compiler warnings! If your project has a lot of warnings that are being ignored, this is a good reason to go through and clean them up, so new warnings like this can be spotted quickly and fixed. One of the first things I do when starting with a new client is look at how many warnings the team has been ignoring and point out some of the nasty ones they weren't aware were lurking in the code base, hidden by the noise.
I'm trying to improve the performance of some computation in my system. Basically, I want to generate a series of actions based on some data. This doesn't scale well, so I want to do the work in parallel and collect the results afterwards (a bit like how futures work).
I have an interface with a series of implementations that each get a collection of actions, and I want to call all of them in parallel and await the results at the end.
The issue is that, when I view the logs, it is clearly doing this sequentially, waiting on each action getter before going to the next one. I thought async would make this run concurrently, but it doesn't.
The method the runBlocking is in runs within a Spring transaction. Maybe that has something to do with it.
runBlocking {
    val actions = actionsReportGetters.map { actionReportGetter ->
        async {
            getActions(actionReportGetter, abstractUser)
        }
    }.awaitAll().flatten()

    allActions.addAll(actions)
}

private suspend fun getActions(actionReportGetter: ActionReportGetter, traderUser: TraderUser): List<Action> {
    return actionReportGetter.getActions(traderUser)
}

interface ActionReportGetter {
    fun getActions(traderUser: TraderUser): List<Action>
}
Looks like you are doing a blocking operation in ActionReportGetter.getActions in a single-threaded environment (probably on the main thread).
For such IO operations you should launch your coroutines on Dispatchers.IO, which provides a thread pool with multiple threads.
Update your code to this:
async(Dispatchers.IO) { // Switch to the IO dispatcher
    getActions(actionReportGetter, abstractUser)
}
Also getActions need not be a suspending function here. You can remove the suspend modifier from it.
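Putting both suggestions together, the whole block might look roughly like this (a sketch reusing the names from the question, with the getActions helper inlined since it no longer needs to suspend):

// Sketch: Dispatchers.IO gives each blocking getActions call its own pool
// thread, so the mapped asyncs actually run concurrently.
runBlocking {
    val actions = actionsReportGetters.map { actionReportGetter ->
        async(Dispatchers.IO) {
            actionReportGetter.getActions(abstractUser)
        }
    }.awaitAll().flatten()

    allActions.addAll(actions)
}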
We are using Java 8 parallel streams to process a task, and we are submitting the task through ForkJoinPool#submit. We are not using the JVM-wide ForkJoinPool.commonPool; instead we are creating our own custom pool to specify the parallelism, and storing it as a static variable.
We have a validation framework where we subject a list of tables to a list of validators, and we submit this job through the custom ForkJoinPool as follows:
static ForkJoinPool forkJoinPool = new ForkJoinPool(4);

List<Table> tables = tableDAO.findAll();
ModelValidator<Table, ValidationResult> validator = ValidatorFactory
        .getInstance().getTableValidator();

List<ValidationResult> results = forkJoinPool.submit(
        () -> tables.stream()
                .parallel()
                .map(validator)
                .filter(result -> result.getValidationMessages().size() > 0)
                .collect(Collectors.toList())).get();
The problem we are having is that the individual validators in the downstream components, which run on separate threads from our static ForkJoinPool, rely on a tenant_id that is different for every request and is stored in an InheritableThreadLocal variable. Since we are creating a static ForkJoinPool, the threads pooled by the ForkJoinPool only inherit the value of the parent thread when they are first created. The pooled threads will not know the new tenant_id for the current request, so for subsequent executions they keep using an old tenant_id.
I tried creating a custom ForkJoinPool, specifying a ForkJoinWorkerThreadFactory in the constructor and overriding the onStart method to feed in the new tenant_id. That doesn't work, since onStart is called only once, at thread creation time, and not for each execution.
It seems like we need something like ThreadPoolExecutor#beforeExecute, which is not available for ForkJoinPool. So what alternative do we have if we want to pass the current thread-local value to the statically pooled threads?
One workaround would be to create a ForkJoinPool for each request rather than making it static, but we would rather not do that, to avoid the cost of repeated thread creation.
What alternatives do we have?
I found the following solution, which works without changing any underlying code. Basically, the map method takes a functional interface, which I am representing as a lambda expression. This expression adds a preExecution hook to set the new tenantId in the current ThreadLocal, and cleans it up in postExecution.
forkJoinPool.submit(() -> tables.stream()
        .parallel()
        .map(item -> {
            preExecution(tenantId);
            try {
                return validator.apply(item);
            } finally {
                postExecution();
            }
        })
        .filter(validationResult ->
                validationResult.getValidationMessages().size() > 0)
        .collect(Collectors.toList())).get();
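The preExecution and postExecution helpers above are not shown in the answer; they are assumed to be thin wrappers around the tenant ThreadLocal, roughly like this (TenantContext is a hypothetical holder for the InheritableThreadLocal mentioned in the question):

// Hypothetical holder for the tenant_id InheritableThreadLocal from the question.
final class TenantContext {
    private static final InheritableThreadLocal<String> TENANT = new InheritableThreadLocal<>();

    static void set(String tenantId) { TENANT.set(tenantId); }
    static String get()              { return TENANT.get(); }
    static void clear()              { TENANT.remove(); }
}

// Helpers used in the lambda above: set the caller's tenant on the pooled
// worker thread before validating, and remove it afterwards so the value
// doesn't leak into the next request served by that thread.
private static void preExecution(String tenantId) {
    TenantContext.set(tenantId);
}

private static void postExecution() {
    TenantContext.clear();
}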
The best option in my view would be to get rid of the thread local and pass it as an argument instead. I understand that this could be a massive undertaking though. Another option would be to use a wrapper.
Assuming that your validator has a validate method, you could do something like:
public class WrappingModelValidator implements ModelValidator<Table, ValidationResult> {
    private final ModelValidator<Table, ValidationResult> v;
    private final String tenantId;

    public WrappingModelValidator(ModelValidator<Table, ValidationResult> v, String tenantId) {
        this.v = v;
        this.tenantId = tenantId;
    }

    public ValidationResult validate(Table t) {
        String oldValue = YourThreadLocal.get();
        YourThreadLocal.set(tenantId);
        try {
            return v.validate(t);
        } finally {
            YourThreadLocal.set(oldValue);
        }
    }
}
Then you simply wrap your old validator and it will set the thread local on entry and restore it when done.
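Usage would then look something like the sketch below, reusing the names from the question: the wrapper is created on the request thread, where the tenant id is known, and its validate method is handed to map as a method reference.

// Sketch: wrap the factory-provided validator with the tenant id captured on
// the request thread, then hand the wrapper's validate method to the stream.
ModelValidator<Table, ValidationResult> validator = ValidatorFactory
        .getInstance().getTableValidator();
WrappingModelValidator wrapped = new WrappingModelValidator(validator, tenantId);

List<ValidationResult> results = forkJoinPool.submit(
        () -> tables.stream()
                .parallel()
                .map(wrapped::validate)
                .filter(r -> r.getValidationMessages().size() > 0)
                .collect(Collectors.toList())).get();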
Is there a way to pass data to dependencies registered with either Execution Context Scope or Lifetime Scope in Simple Injector?
One of my dependencies requires a piece of data in order to be constructed in the dependency chain. During HTTP and WCF requests, this data is easy to get to. For HTTP requests, the data is always present in either the query string or as a Request.Form parameter (and thus is available from HttpContext.Current). For WCF requests, the data is always present in the OperationContext.Current.RequestContext.RequestMessage XML, and can be parsed out. I have many command handler implementations that depend on an interface implementation that needs this piece of data, and they work great during HTTP and WCF scoped lifestyles.
Now I would like to be able to execute one or more of these commands using the Task Parallel Library so that it will execute in a separate thread. It is not feasible to move the piece of data out into a configuration file, class, or any other static artifact. It must initially be passed to the application either via HTTP or WCF.
I know how to create a hybrid lifestyle using Simple Injector, and already have one set up as hybrid HTTP / WCF / Execution Context Scope (command interfaces are async, and return Task instead of void). I also know how to create a command handler decorator that will start a new Execution Context Scope when needed. The problem is, I don't know how or where (or if I can) "save" this piece of data so that it is available when the dependency chain needs it to construct one of the dependencies.
Is it possible? If so, how?
Update
Currently, I have an interface called IProvideHostWebUri with two implementations: HttpHostWebUriProvider and WcfHostWebUriProvider. The interface and registration look like this:
public interface IProvideHostWebUri
{
    Uri HostWebUri { get; }
}

container.Register<IProvideHostWebUri>(() =>
{
    if (HttpContext.Current != null)
        return container.GetInstance<HttpHostWebUriProvider>();

    if (OperationContext.Current != null)
        return container.GetInstance<WcfHostWebUriProvider>();

    throw new NotSupportedException(
        "The IProvideHostWebUri service is currently only supported for HTTP and WCF requests.");
}, scopedLifestyle); // scopedLifestyle is the hybrid mentioned previously
So ultimately unless I gut this approach, my goal would be to create a third implementation of this interface which would then depend on some kind of context to obtain the Uri (which is just constructed from a string in the other 2 implementations).
@Steven's answer seems to be what I am looking for, but I am not sure how to make the ITenantContext implementation immutable and thread-safe. I don't think it will need to be made disposable, since it just contains a Uri value.
So what you are basically saying is that:
You have an initial request that contains some contextual information captured in the request 'header'.
During this request you want to kick off a background operation (on a different thread).
The contextual information from the initial request should stay available when running in the background thread.
The short answer is that Simple Injector does not contain anything that allows you to do so. The solution is in creating a piece of infrastructure that allows moving this contextual information along.
Say for instance you are processing command handlers (wild guess here ;-)), you can specify a decorator as follows:
public class BackgroundProcessingCommandHandlerDecorator<T> : ICommandHandler<T>
{
    private readonly ITenantContext tenantContext;
    private readonly Container container;
    private readonly Func<ICommandHandler<T>> decorateeFactory;

    public BackgroundProcessingCommandHandlerDecorator(ITenantContext tenantContext,
        Container container, Func<ICommandHandler<T>> decorateeFactory) {
        this.tenantContext = tenantContext;
        this.container = container;
        this.decorateeFactory = decorateeFactory;
    }

    public void Handle(T command) {
        // Capture the contextual info in a local variable
        // NOTE: This object must be immutable and thread-safe.
        var tenant = this.tenantContext.CurrentTenant;

        // Kick off a new background operation
        Task.Factory.StartNew(() => {
            using (container.BeginExecutionContextScope()) {
                // Load a service that allows setting contextual information
                var context = this.container.GetInstance<ITenantContextApplier>();

                // Set the context for this thread, before resolving the handler
                context.SetCurrentTenant(tenant);

                // Resolve the handler
                var decoratee = this.decorateeFactory.Invoke();

                // And execute it.
                decoratee.Handle(command);
            }
        });
    }
}
Note that in the example I make use of an imaginary ITenantContext abstraction, assuming that you need to supply the commands with information about the current tenant, but any other sort of contextual information will obviously do as well.
The decorator is a small piece of infrastructure that allows you to process commands in the background and it is its responsibility to make sure all the required contextual information is moved to the background thread as well.
To be able to do this, the contextual information is captured and used as a closure in the background thread. I created an extra abstraction for this, namely ITenantContextApplier. Do note that the tenant context implementation can implement both the ITenantContext and the ITenantContextApplier interface. If however you define the ITenantContextApplier in your composition root, it will be impossible for the application to change the context, since it does not have a dependency on ITenantContextApplier.
Here's an example:
// Base library
public interface ITenantContext { }

// Business Layer
public class SomeCommandHandler : ICommandHandler<Some> {
    public SomeCommandHandler(ITenantContext context) { ... }
}

// Composition Root
public static class CompositionRoot {
    // Make the ITenantContextApplier private so nobody can see it.
    // Do note that this is optional; there's no harm in making it public.
    private interface ITenantContextApplier {
        void SetCurrentTenant(Tenant tenant);
    }

    private class AspNetTenantContext : ITenantContextApplier, ITenantContext {
        // Implement both interfaces
    }

    private class BackgroundProcessingCommandHandlerDecorator<T> { ... }

    public static Container Bootstrap(Container container) {
        container.RegisterPerWebRequest<ITenantContext, AspNetTenantContext>();

        container.Register<ITenantContextApplier>(() =>
            container.GetInstance<ITenantContext>() as ITenantContextApplier);

        container.RegisterDecorator(typeof(ICommandHandler<>),
            typeof(BackgroundProcessingCommandHandlerDecorator<>));

        return container;
    }
}
A different approach would be to just make the complete ITenantContext available to the background thread, but to be able to pull this off, you need to make sure that:
The implementation is immutable and thus thread-safe.
The implementation doesn't require disposing, because it will typically be disposed when the original request ends.
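On the follow-up question about making the ITenantContext implementation immutable and thread-safe: an object whose only state is assigned in the constructor and exposed through read-only properties is immutable by construction, so it can be shared across threads and needs no disposing. A hypothetical sketch, using the host web Uri from the update as the contextual value rather than a Tenant object:

// Hypothetical: an immutable context that just carries the host web Uri.
// All state is set once in the constructor and only exposed through a
// get-only property, so instances are safe to hand to a background thread.
public class HostWebUriTenantContext : ITenantContext
{
    public HostWebUriTenantContext(Uri hostWebUri)
    {
        HostWebUri = hostWebUri ?? throw new ArgumentNullException(nameof(hostWebUri));
    }

    public Uri HostWebUri { get; }
}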
So, what happened in my project was the following:
I have a singleton which is defined in a usual way:
Singleton* Singleton::getInstance()
{
    static Singleton instance;
    return &instance;
}
In its constructor, this singleton object initializes a CORBA ORB and starts running it in a separate (boost) thread wrapper, similar to this:
CorbaController::CorbaController()
{
}

CorbaController::~CorbaController()
{
    if (!CORBA::is_nil(_orb))
        _orb->shutdown(true);
}

void CorbaController::run()
{
    _orb->run();
}

bool CorbaController::initCorba(const std::string& ip, unsigned long port)
{
    // init CORBA
    // ...
    // let the ORB execute in a dedicated thread since the run operation is blocking
    start();
    return true;
}
Now, the normal behavior when destructing the CorbaController is that it calls shutdown on the ORB in its destructor; the run method then returns from orb.run() and the separate thread finishes. However, this only works if the CorbaController is deleted explicitly, or is defined as a local or class variable that goes out of scope at some point.
If I rely on the singleton's static variable being cleaned up at the end of the program, though, the orb.shutdown() deadlocks because the ACE/TAO lib can't acquire a semaphore on some object to be destructed with the ORB shutdown.
Does anybody have an idea of what might be the problem here? Can this be a threading problem, i.e. that the thread which constructs the singleton (and also runs the main function of my application) is different from the thread cleaning up static memory instances?
You have to shut down and destroy the ORB during the regular shutdown of the application; doing it in the destructor of a static object is really too late. Add a shutdown() method to your CORBA controller which does a shutdown and destroy of the ORB.
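A rough sketch of what that could look like; the join() call and the explicit call site at the end of main() are assumptions based on the snippets above, not code from the question:

// Sketch: explicit shutdown called from application code, not from a static
// object's destructor. Member names follow the snippets above.
void CorbaController::shutdown()
{
    if (!CORBA::is_nil(_orb))
    {
        _orb->shutdown(true);   // unblocks the thread sitting in _orb->run()
        join();                 // wait for the ORB thread to finish (assumed thread-wrapper API)
        _orb->destroy();        // release ORB resources while the runtime is still fully alive
        _orb = CORBA::ORB::_nil();
    }
}

// e.g. at the end of main(), before static destructors run:
//     CorbaController::getInstance()->shutdown();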