Concerned about the size of my Aggregate Root

I am new to DDD and have a concern about the size of my Aggregate Root. The object graph is like the image below. (They are collections). The problem is all of the entities depend on the state of the AggregateRoot (Event). My question is: how do I break the aggregate into smaller aggregates? It's like I have a "God" like aggregate root that just manages everything.
This is a very simplistic view of my domain:
and these are the rules:
An event has a number of different states (I implemented the state design pattern here).
An event has a collection of sessions, but only 1 can be active at a time, and only if the event is in the correct state.
A session has two states: Active and Ended.
A session has a collection of Guests.
A session has a collection of photos (maximum of 10).
When a session is deleted, it should delete all its children.
When a session has ended and a photo is deleted, it should check whether any other photos belong to the session. If not, it should also delete the session.
When a session has ended and a photo is deleted, it should sometimes throw an exception, depending on the state of the event.
When a session is active and a photo is deleted, it should not care whether the session has any other photos.
When a session ends it must have at least 1 photo and at least 1 guest.
A photo can be updated, but only if the event is in the right state.
When an event is deleted it should delete all its children.
Edit: I have divided the 1 aggregate into smaller aggregates so that Event, Session and Photo are all ARs. The issue is that a session needs to perform a check on the Event AR before starting. Is it perfectly OK to pass an event object into the session's start method, Session.Start(Event @event), or will I have concurrency issues as outlined in some of the comments?

As a first step, the following 3 articles will be invaluable: http://dddcommunity.org/library/vernon_2011/
With DDD you are splitting the entities up into boundaries where the state is valid after a single operation from an external source completes (i.e. a method call).
Think in terms of the business problem you are trying to solve - you have used the word delete a lot...
Does delete even have a place in the wording of the business experts for whom you are designing the system? Thinking in terms of the real world and not database infrastructure, unless you can create a time machine to travel back in time and stop an event from starting and therefore change history, the word delete has no real world analogy.
If you force yourself to delete children on delete, that operation needs to become a transaction, so things that may not make sense inside the aggregate root are forced into it (so that the state of the entity and all its children can be controlled and guaranteed valid once the method call completes). Yes, there are cases where you can use a transaction across multiple aggregate roots, but these are very rare situations and to be avoided if possible.
Eventual consistency is used as an alternative to transactions and to reduce complexity. If you speak to the person for whom the system is being designed, you will probably find that a delay of seconds or minutes is more than acceptable. This is plenty of time to fire off an event, to which some other business logic is listening and takes the necessary action. Using eventual consistency removes the headaches that come with transactions.
Photos could take up a lot of storage, yes, so you would probably need a cleanup mechanism that runs after an event is marked as finished. I would fire off an event once the session is marked closed; a different system somewhere else listens for this event and, after 1 year (or whatever makes sense for you), removes the photos from the server... assuming you stored something like an array of up to 10 URL strings rather than the photo bytes themselves.
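To illustrate that listener (these are made-up names, not part of the question's model): a SessionEnded notification and a handler that archives the photos some time later, with no cross-aggregate transaction involved.
using System;

// Hypothetical names throughout: a domain event published when a session closes,
// and a handler that runs out-of-band (queue, scheduled job, etc.) some time later.
public class SessionEnded
{
    public Guid EventId { get; set; }
    public Guid SessionId { get; set; }
    public DateTime EndedUtc { get; set; }
}

public interface IPhotoStorage
{
    // assumed abstraction over wherever the photo files actually live
    void ArchivePhotosForSession(Guid sessionId);
}

public class PhotoCleanupHandler
{
    private readonly IPhotoStorage storage;

    public PhotoCleanupHandler(IPhotoStorage storage)
    {
        this.storage = storage;
    }

    public void Handle(SessionEnded message)
    {
        // No cross-aggregate transaction: if this runs seconds or minutes after
        // the session ended, that is usually acceptable to the business.
        storage.ArchivePhotosForSession(message.SessionId);
    }
}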
If this is the maximum extent of your business logic, then don't focus only on DDD; this could be a good fit for plain Entity Framework, which is essentially CRUD and has cascade deletes built in.
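For completeness, a minimal sketch of that CRUD-style alternative using EF6's fluent API; the Event/Session/Photo classes here are simplified persistence POCOs for illustration, not the richer entities sketched further down.
using System.Collections.Generic;
using System.Data.Entity; // Entity Framework 6

public class Photo   { public int Id { get; set; } public string Url { get; set; } }
public class Session { public int Id { get; set; } public List<Photo> Photos { get; set; } }
public class Event   { public int Id { get; set; } public List<Session> Sessions { get; set; } }

public class EventContext : DbContext
{
    public DbSet<Event> Events { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // deleting an Event deletes its Sessions, and deleting a Session deletes its Photos
        modelBuilder.Entity<Event>()
            .HasMany(e => e.Sessions)
            .WithRequired()
            .WillCascadeOnDelete(true);

        modelBuilder.Entity<Session>()
            .HasMany(s => s.Photos)
            .WithRequired()
            .WillCascadeOnDelete(true);
    }
}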
Answer to the edits
What is a photo? Does it contain attributes, or is it instead something like a URL to a photo, or a path to a picture file?
I'm not yet thinking of databases, that should be the very last thing that is thought of and the solution should be database/technology agnostic. I see the rules as:
An event has many sessions.
A Session has the following states: NotStarted, Started and Ended.
A Session has a collection of Guests, I'm going to assume these are unique (in that two guests with the same name are not the same, so a guest should be an aggregate root).
An Event has one active Session.
When there are no active Sessions, an Event can be marked as Finished.
No Sessions can be started once an Event is marked as Finished.
A session has a collection of up to 10 photos.
When a session has ended, a photo cannot be removed.
A Session cannot start if there are no Guests. A Session cannot end if there are no Photos.
You cannot return the Session directly: a user of your code may call Start() on it, so you need some way of checking with the Event whether it can be started. To allow chaining up to the root, I pass the Event into the Session. If you don't like this approach, then just put the methods that manipulate the Session on the Event, so everything is accessed via the Event, which enforces all the rules.
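If you went that way, a very rough sketch of the shape (illustrative only, deliberately simplified with Guid identifiers rather than the SessionId type used in the code below) might be:
using System;
using System.Collections.Generic;
using System.Linq;

// Alternative shape: all Session manipulation goes through the Event root,
// so callers can never bypass the "one active session at a time" rule.
public class Event
{
    private readonly Dictionary<Guid, Session> sessions = new Dictionary<Guid, Session>();
    private bool isEventClosed;

    public void StartSession(Guid sessionId)
    {
        if (isEventClosed)
            throw new InvalidOperationException("cannot start a session on a closed event");
        if (sessions.Values.Any(s => s.IsActive))
            throw new InvalidOperationException("another session is already underway");

        sessions[sessionId].Start();
    }

    // Session lives inside the aggregate and no longer needs a reference to its parent.
    public class Session
    {
        public bool IsActive { get; private set; }
        internal void Start() { IsActive = true; }
    }
}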
In the simplest case, I see the photo as a string (value object) in the Session entity. As a first stab I would do something like this:
// untested, do not know if it will compile!
public class Event
{
    List<Session> sessions = new List<Session>();
    bool isEventClosed = false;

    public SessionId NewSession(string description, string speaker)
    {
        if (isEventClosed)
            throw new InvalidOperationException("cannot add session to closed event");

        // create a new session; decide what you will use for identity (string, guid, etc.)
        var sessionId = new SessionId(); // in this case autogenerate a guid inside this class
        this.sessions.Add(new Session(this, sessionId, description, speaker));
        return sessionId;
    }

    public Session GetSession(SessionId id)
    {
        return this.sessions.FirstOrDefault(x => x.Id.Equals(id));
    }

    public bool CanStartSession(Session sessionToStart)
    {
        // TODO: check that sessionToStart actually belongs to this event!
        if (this.isEventClosed)
            return false;

        foreach (var session in sessions)
        {
            if (session.IsStarted() && !session.IsEnded())
                return false;
        }
        return true;
    }
}
public class Session
{
    List<GuestId> guests = new List<GuestId>();   // list of guests
    List<string> photoUrls = new List<string>();  // strings of photo urls
    DateTime? started = null;
    DateTime? ended = null;
    readonly Event parentEvent;

    public SessionId Id { get; private set; }

    public Session(Event parent, SessionId id, string description, string speaker)
    {
        this.Id = id;
        this.parentEvent = parent;
        // store all the other params
    }

    public void AddGuest(GuestId guestId)
    {
        this.guests.Add(guestId);
    }

    public void RemoveGuest(GuestId guestId)
    {
        if (this.IsEnded())
            throw new InvalidOperationException("cannot remove guest after session has ended");
        this.guests.Remove(guestId);
    }

    public void AddPhoto(string url)
    {
        if (this.photoUrls.Count >= 10)
            throw new InvalidOperationException("cannot add more than 10 photos");
        this.photoUrls.Add(url);
    }

    public void Start()
    {
        if (this.guests.Count == 0)
            throw new InvalidOperationException("cannot start session without guests");
        if (CanBeStarted() == false)
            throw new InvalidOperationException("already started");
        if (this.parentEvent.CanStartSession(this) == false)
            throw new InvalidOperationException("another session at our event is already underway or the event is closed");
        this.started = DateTime.UtcNow;
    }

    public void End()
    {
        if (IsEnded())
            throw new InvalidOperationException("session already ended");
        if (this.photoUrls.Count == 0)
            throw new InvalidOperationException("cannot end session without photos");
        this.ended = DateTime.UtcNow;
        // could raise an event here that the session has ended; see mediator/event-handler pattern
    }

    public bool CanBeStarted()
    {
        return IsStarted() == false && IsEnded() == false;
    }

    public bool IsStarted()
    {
        return this.started != null;
    }

    public bool IsEnded()
    {
        return this.ended != null;
    }
}
No warranty on the above; it may well need to change over time as your understanding evolves and as you see better ways to refactor the code.
A guest cannot be removed once a session has ended - this logic has been added with a simple test.
Regarding deletion of guests and leaving sessions with 0 guests: you have stated that guests cannot be removed once a session has ended, so allowing that to happen at any point would violate that business rule; it can't ever happen. Besides, the term delete applied to a person makes no sense in your problem space: people cannot be deleted; they existed and there will always be a record that they existed. The database term delete belongs in the database, not in this domain model as you have described it.
Is this.parentEvent.CanStartSession(this)==false safe? No, it is not multithread safe on its own, but commands would be run independently, perhaps in parallel, each in its own thread:
void HandleStartSessionCommand(EventId eventId, SessionId sessionId)
{
    // repositories etc. have been provided in the constructor
    var @event = repository.GetById(eventId);
    var session = @event.GetSession(sessionId);
    session.Start();
    repository.Save(@event); // saving the root persists the changed session with it
}
If we were using event sourcing, then inside the repository the stream of changed events is written in a transaction, and the aggregate root's current version is used so we can detect any conflicting changes. So in terms of event sourcing, a change to the Session would indeed be a change to its parent aggregate root, since it doesn't make sense to refer to a Session event in its own right (it will always be an Event event; it cannot exist independently). Obviously the code I have given in my example is not event sourced, but it could be written that way.
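To make the version check concrete, here is a rough illustrative sketch (not the answer's actual repository) of optimistic concurrency: the version loaded with the aggregate must still be the current version at save time, otherwise another command changed it in between.
using System;
using System.Collections.Generic;

// Illustrative in-memory repository, assumed shape only.
public class InMemoryEventRepository
{
    private readonly Dictionary<Guid, Event> aggregates = new Dictionary<Guid, Event>();
    private readonly Dictionary<Guid, int> versions = new Dictionary<Guid, int>();

    public Event GetById(Guid id, out int version)
    {
        version = versions[id];
        return aggregates[id];
    }

    public void Save(Guid id, Event aggregate, int expectedVersion)
    {
        if (versions[id] != expectedVersion)
            throw new InvalidOperationException(
                "the Event was modified by another command; reload and retry");

        aggregates[id] = aggregate;
        versions[id] = expectedVersion + 1;
    }
}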
If event sourcing is not used then depending on the transaction implementation, you could wrap the command handler in a transaction as a cross cutting concern:
// requires a reference to System.Transactions
public class TransactionalCommandHandlerDecorator<TCommand>
    : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> decoratedHandler;

    public TransactionalCommandHandlerDecorator(
        ICommandHandler<TCommand> decoratedHandler)
    {
        this.decoratedHandler = decoratedHandler;
    }

    public void Handle(TCommand command)
    {
        using (var scope = new TransactionScope())
        {
            this.decoratedHandler.Handle(command);
            scope.Complete();
        }
    }
}
In short, we are using the infrastructure implementation to provide concurrency safety.
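The ICommandHandler<TCommand> abstraction the decorator implements is implied above but never shown; a minimal version looks like this, with purely hypothetical wiring (StartSessionCommand and StartSessionCommandHandler are made-up names) shown in the comment.
// Minimal handler abstraction implied by the decorator above.
public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

// Hypothetical composition-root wiring: the real handler never knows about transactions.
// ICommandHandler<StartSessionCommand> handler =
//     new TransactionalCommandHandlerDecorator<StartSessionCommand>(
//         new StartSessionCommandHandler(repository));
// handler.Handle(new StartSessionCommand(eventId, sessionId));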

Related

Handling transition to state for multiple events

I have a MassTransitStateMachine that orchestrates a process which involves creating multiple events.
Once all of the events are done, I want the state to transition to a 'clean up' phase.
Here is the relevant state declaration and filter function:
During(ImportingData,
When(DataImported)
// When we get a data imported event, mark this source as done.
.Then(MarkImportCompletedForLocation),
When(DataImported, IsAllDataImported)
// Once all are done, we can transition to cleaning up...
.Then(CleanUpSources)
.TransitionTo(CleaningUp)
);
...snip...
private static bool IsAllDataImported(EventContext<DataImportSagaState, DataImportMappingCompletedEvent> ctx)
{
return ctx.Instance.Locations.Values.All(x => x);
}
So while the state is ImportingData, I expect to receive multiple DataImported events. Each event marks its location as done so that the IsAllDataImported method can determine if we should transition to the next state.
However, if the last two DataImported events arrive at the same time, the handler for transitioning to the CleaningUp phase fires twice, and I end up trying to perform the clean up twice.
I could solve this in my own code, but I was expecting the state machine to manage this. Am I doing something wrong, or do I just need to handle the contention myself?
The solution proposed by Chris won't work in my situation because I have multiple events of the same type arriving. I need to transition only when all of those events have arrived. The CompositeEvent construct doesn't work for this use case.
My solution to this was to raise a new AllDataImported event during the MarkImportCompletedForLocation method. This method now handles determining whether all sub-imports are complete in a thread safe way.
So my state machine definition is:
During(ImportingData,
When(DataImported)
// When we get a data imported event, mark the URI in the locations list as done.
.Then(MarkImportCompletedForLocation),
When(AllDataImported)
// Once all are done, we can transition to cleaning up...
.TransitionTo(CleaningUp)
.Then(CleanUpSources)
);
The IsAllDataImported method is no longer needed as a filter.
The saga state has a Locations property:
public Dictionary<Uri, bool> Locations { get; set; }
And the MarkImportCompletedForLocation method is defined as follows:
private void MarkImportCompletedForLocation(BehaviorContext<DataImportSagaState, DataImportedEvent> ctx)
{
lock (ctx.Instance.Locations)
{
ctx.Instance.Locations[ctx.Data.ImportSource] = true;
if (ctx.Instance.Locations.Values.All(x => x))
{
var allDataImported = new AllDataImportedEvent {CorrelationId = ctx.Instance.CorrelationId};
this.CreateEventLift(AllDataImported).Raise(ctx.Instance, allDataImported);
}
}
}
(I've just written this so that I understand how the general flow will work; I recognise that the MarkImportCompletedForLocation method needs to be more defensive by verifying that keys exist in the dictionary.)
You can use a composite event to accumulate multiple events into a subsequent event that fires when the dependent events have fired. This is defined using:
CompositeEvent(() => AllDataImported, x => x.ImportStatus, DataImported, MoreDataImported);
During(ImportingData,
When(DataImported)
.Then(context => { /* do something with the data */ }),
When(MoreDataImported)
.Then(context => { /* do something with more data */ }),
When(AllDataImported)
.Then(context => { /* okay, have all the data now */ }));
Then, in your state machine state instance:
class DataImportSagaState :
SagaStateMachineInstance
{
public int ImportStatus { get; set; }
}
This should address the problem you are trying to solve, so give it a shot. Note that event order doesn't matter; they can arrive in any order, as the record of which events have been received is kept in the ImportStatus property of the instance.
The data of the individual events is not saved, so you'll need to capture that into the state instance yourself using .Then() methods.
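A rough sketch of what that can look like, matching the ctx.Instance/ctx.Data style used above (usings and event correlation configuration are omitted, and it assumes the state class also carries a CurrentState property plus the Locations dictionary from earlier):
public class DataImportStateMachine : MassTransitStateMachine<DataImportSagaState>
{
    public State ImportingData { get; private set; }
    public State CleaningUp { get; private set; }

    public Event<DataImportedEvent> DataImported { get; private set; }
    public Event<DataImportedEvent> MoreDataImported { get; private set; }
    public Event AllDataImported { get; private set; }

    public DataImportStateMachine()
    {
        InstanceState(x => x.CurrentState); // assumed property on the saga state

        CompositeEvent(() => AllDataImported, x => x.ImportStatus, DataImported, MoreDataImported);

        During(ImportingData,
            When(DataImported)
                .Then(ctx => ctx.Instance.Locations[ctx.Data.ImportSource] = true),
            When(MoreDataImported)
                .Then(ctx => ctx.Instance.Locations[ctx.Data.ImportSource] = true),
            When(AllDataImported)
                // by now Locations holds whatever you copied out of the individual events
                .TransitionTo(CleaningUp));
    }
}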

Domain Driven Design - complex validation of commands across different aggregates

I've only just begun with DDD and am currently trying to grasp the ways to do different things with it. I'm trying to design it using asynchronous events (no event-sourcing yet) with CQRS. Currently I'm stuck with validation of commands. I've read this question: Validation in a Domain Driven Design, however, none of the answers seem to cover complex validation across different aggregate roots.
Let's say I have these aggregate roots:
Client - contains list of enabled services, each service can have a value-object list of discounts and their validity.
DiscountOrder - an order to enable more discounts on some of the services of given client, contains order items with discount configuration.
BillCycle - each period when bills are generated is described by its own billcycle.
Here's the usecase:
A discount order can be submitted. Each new discount period in the discount order must not overlap with any of the BillCycles. No two discounts of the same type can be active at the same time on one service.
Basically, using Hibernate in CRUD style, this would look something like the following (Java code, but the question is language-agnostic):
public class DiscountProcessor {
...
@Transactional
public void processOrder(long orderId) {
DiscOrder order = orderDao.get(orderId);
BillCycle[] cycles = billCycleDao.getAll();
for (OrderItem item : order.getItems()) {
//Validate billcycle overlapping
for (BillCycle cycle : cycles) {
if (periodsOverlap(cycle.getPeriod(), item.getPeriod())) {
throw new PeriodsOverlapWithBillCycle(...);
}
}
//Validate discount overlapping
for (Discount d : item.getForService().getDiscounts()) {
if (d.getType() == item.getType() && periodsOverlap(d.getPeriod(), item.getPeriod())) {
throw new PeriodsOverlapWithOtherItems(...);
}
}
//Maybe some other validations in future or stuff
...
}
createDiscountsForOrder(order);
}
}
Now here are my thoughts on implementation:
Basically, the order can be in three states: "DRAFT", "VALIDATED" and "INVALID". "DRAFT" state can contain any kind of invalid data, "VALIDATED" state should only contain valid data, "INVALID" should contain invalid data.
Therefore, there should be a method which tries to switch the state of the order; let's call it order.validate(...). The method will perform the validations required for the shift of state (DRAFT -> VALIDATED or DRAFT -> INVALID) and, if successful, change the state and emit an OrderValidated or OrderInvalidated event.
Now, what I'm struggling with is the signature of said order.validate(...) method. To validate the order, it requires several other aggregates, namely BillCycle and Client. I can see these solutions:
1. Put those aggregates directly into the validate method, like order.validateWith(client, cycles) or order.validate(new OrderValidationData(client, cycles)). However, this seems a bit hackish.
2. Extract the required information from client and cycle into some kind of intermediate validation data object, something like order.validate(new OrderValidationData(client.getDiscountInfos(), getListOfPeriods(cycles))).
3. Do validation in a separate service method which can do whatever it wants with whatever aggregates it wants (basically similar to the CRUD example above). However, this seems far from DDD, as the order.validate() method becomes a dummy state setter, and calling it makes it possible to bring an order unintuitively into a corrupted state (status = "valid" but containing invalid data, because nobody bothered to call the validation service).
What is the proper way to do it, and could it be that my whole thought process is wrong?
Thanks in advance.
What about introducing a delegate object to manipulate Order, Client, BillCycle?
class OrderingService {
@Injected private ClientRepository clientRepository;
@Injected private BillingRepository billRepository;
Specification<Order> validSpec() {
return new ValidOrderSpec(clientRepository, billRepository);
}
}
class ValidOrderSpec implements Specification<Order> {
private final ClientRepository clientRepository;
private final BillingRepository billRepository;
ValidOrderSpec(ClientRepository clientRepository, BillingRepository billRepository) {
this.clientRepository = clientRepository;
this.billRepository = billRepository;
}
@Override public boolean isSatisfiedBy(Order order) {
Client client = clientRepository.findBy(order.getClientId());
BillCycle[] billCycles = billRepository.findAll();
// validate here and return the result
}
}
class Order {
void validate(Specification<Order> spec) {
if (spec.isSatisfiedBy(this)) {
validated();
} else {
invalidated();
}
}
}
The pros and cons of your three solutions, from my perspective:
order.validateWith(client, cycles)
It is easy to test the validation with order.
// file: OrderUnitTest
@Test public void should_change_to_valid_when_xxxx() {
Client client = new ClientFixture()...build()
BillCycle[] cycles = new BillCycleFixture()...build()
Order order = new OrderFixture()...build();
order.validateWith(client, cycles);
assertThat(order.getStatus(), is(VALID));
}
so far so good, but there seems to be some duplicate test code for DiscountOrderProcess.
// file: DiscountProcessor
@Test public void should_change_to_valid_when_xxxx() {
Client client = new ClientFixture()...build()
BillCycle[] cycles = new BillCycleFixture()...build()
Order order = new OrderFixture()...build()
DiscountProcessor subject = ...
given(clientRepository).findBy(client.getId()).thenReturn(client);
given(cycleRepository).findAll().thenReturn(cycles);
given(orderRepository).findBy(order.getId()).thenReturn(order);
subject.processOrder(order.getId());
assertThat(order.getStatus(), is(VALID));
}
// or in mock style
@Test public void should_change_to_valid_when_xxxx() {
Client client = mock(Client.class)
BillCycle[] cycles = array(mock(BillCycle.class))
Order order = mock(Order.class)
DiscountProcessor subject = ...
given(clientRepository).findBy(client.getId()).thenReturn(client);
given(cycleRepository).findAll().thenReturn(cycles);
given(orderRepository).findBy(order.getId()).thenReturn(order);
given(client).....
given(cycle1)....
subject.processOrder(order.getId());
verify(order).validated();
}
order.validate(new OrderValidationData(client.getDiscountInfos(), getListOfPeriods(cycles)))
Same as the above one, you still need to prepare data for both OrderUnitTest and discountOrderProcessUnitTest. But I think this one is better as order is not tightly coupled with Client and BillCycle.
order.validate()
Similar to my idea, if you keep validation in the domain layer. Sometimes it is just not any entity's responsibility; consider a domain service or a specification object.
// file: OrderUnitTest
@Test public void should_change_to_valid_when_xxxx() {
Client client = new ClientFixture()...build()
BillCycle[] cycles = new BillCycleFixture()...build()
Order order = new OrderFixture()...build();
Specification<Order> spec = new ValidOrderSpec(clientRepository, cycleRepository);
given(clientRepository).findBy(client.getId()).thenReturn(client);
given(cycleRepository).findAll().thenReturn(cycles);
order.validate(spec);
assertThat(order.getStatus(), is(VALID));
}
// file: DiscountProcessor
@Test public void should_change_to_valid_when_xxxx() {
Order order = new OrderFixture()...build()
Specification<Order> spec = mock(ValidOrderSpec.class);
DiscountProcessor subject = ...
given(orderingService).validSpec().thenReturn(spec);
given(spec).isSatisfiedBy(order).thenReturn(true);
given(orderRepository).findBy(order.getId()).thenReturn(order);
subject.processOrder(order.getId());
assertThat(order.getStatus(), is(VALID));
}
Do the 3 possible states reflect your domain, or are they just extrapolation? I'm asking because your sample code doesn't seem to change the Order state but throws an exception when it's invalid.
If it's acceptable for the order to stay DRAFT for a short period of time after being submitted, you could have DiscountOrder emit a DiscountOrderSubmitted domain event. A handler catches the event and (delegates to a Domain service that) examines if the submit is legit or not. It would then issue a ChangeOrderState command to make the order either VALIDATED or INVALID.
You could even suppose that the change is legit by default and have processOrder() directly take it to VALIDATED, until proven otherwise by a subsequent INVALID counter-order given by the validation service.
This is not much different from your third solution or Hippoom's one though, except every step of the process is made explicit with its own domain event. I guess that with your current aggregate design you're doomed to have a third party orchestrator (as un-DDD and transaction script-esque as it may sound) that controls the process, since the DiscountOrder aggregate doesn't have native access to all information to tell if a given transformation is valid or not.

NHibernate IPreUpdateEventListener doesn't insert to second table

I'm prototyping some simple audit logging functionality. I have a mid-sized entity model (~50 entities) and I'd like to implement audit logging on about 5 or 6. Ultimately I'd like to get this working on Inserts & Deletes as well, but for now I'm just focusing on the updates.
The problem is, when I do session.Save (or SaveOrUpdate) to my auditLog table from within the EventListener, the original object is persisted (updated) correctly, but my AuditLog object never gets inserted.
I think it's a problem with both the Pre and Post event listeners being called too late in the NHibernate save life cycle for the session to still be used.
//in my ISessionFactory Build method
nHibernateConfiguration.EventListeners.PreUpdateEventListeners =
new IPreUpdateEventListener[]{new AuditLogListener()};
//in my AuditLogListener
public class AuditLogListener : IPreUpdateEventListener
{
public bool OnPreUpdate(PreUpdateEvent @event)
{
string message = // code to look at @event.Entity & build message - this works
if (!string.IsNullOrEmpty(message))
AuditLogHelper.Log(message, @event.Session); // Session is an IEventSource
return false; //Don't veto the change
}
}
//In my helper
public static void Log(string message, IEventSource session)
{
var user = session.QueryOver<User>()
.Where(x => x.Name == "John")
.SingleOrDefault();
//have confirmed a valid user is found
var logItem = new AdministrationAuditLog
{
LogDate = DateTime.Now,
Message = message,
User = user
};
(session as ISession).SaveOrUpdate(logItem);
}
When it hits the session.SaveOrUpdate() in the last method, no errors occur. No exceptions are thrown. It seems to succeed and moves on. But nothing happens. The audit log entry never appears in the database.
The only way I've been able to get this to work is to create a completely new Session & Transaction inside this method, but this isn't really ideal, as the code proceeds back out of the listener method, hits the session.Transaction.Commit() in my main app, and if that transaction fails, then I've got an orphaned log message in my audit table for something that never happened.
Any pointers where I might be going wrong ?
EDIT
I've also tried to SaveOrUpdate the LogItem using a child session obtained from the event, based on some comments in this thread: http://ayende.com/blog/3987/nhibernate-ipreupdateeventlistener-ipreinserteventlistener
var childSession = session.GetSession(EntityMode.Poco);
var logItem = new AdministrationAuditLog
{
LogDate = DateTime.Now,
Message = message,
User = databaseLogin.User
};
childSession.SaveOrUpdate(logItem);
Still nothing appears in my Log table in the db. No errors or exceptions.
You need to create a child session, currentSession.GetSession(EntityMode.Poco), in your OnPreUpdate method and use this in your log method. Depending on your flushmode setting, you might need to flush the child session as well.
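An untested sketch of the suggested approach, reusing the question's helper signature and types; the explicit Flush at the end is the part that is easy to miss:
public static void Log(string message, IEventSource session)
{
    // child session shares the parent session's connection and transaction
    var childSession = session.GetSession(EntityMode.Poco);

    var user = childSession.QueryOver<User>()
        .Where(x => x.Name == "John")
        .SingleOrDefault();

    var logItem = new AdministrationAuditLog
    {
        LogDate = DateTime.Now,
        Message = message,
        User = user
    };

    childSession.SaveOrUpdate(logItem);

    // depending on the FlushMode, the insert may never be issued unless the
    // child session is flushed explicitly
    childSession.Flush();
}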
Also, any particular reason you want to roll your own solution? FYI, NHibernate Envers is now a pretty mature library.

How can a JSF/ICEfaces component's parameters be updated immediately?

I have an ICEfaces web app which contains a component with a property linked to a backing bean variable. In theory, the variable's value is programmatically modified, and the component sees the change and updates its appearance/properties accordingly.
However, it seems that the change in variable isn't "noticed" by the component until the end of the JSF cycle (which, from my basic understanding, is the render response phase).
The problem is, I have a long file-copy operation to perform, and I would like the inputText component to show a periodic status update. However, since the component is only updated at the render response phase, it doesn't show any output until the Java methods have finished executing, and then it shows all the accumulated changes at once.
I have tried using FacesContext.getCurrentInstance().renderResponse() and other functions, such as PushRenderer.render(String ID) to force XmlHttpRequest to initialize early, but no matter what, the appearance of the component does not change until the Java code finishes executing.
One possible solution that comes to mind is to have an invisible button somewhere that is automatically "pressed" by the bean when step 1 of the long operation completes, and by clicking it, it calls step 2, and so on and so forth. It seems like it would work, but I don't want to spend time hacking together such an inelegant solution when I would hope that there is a more elegant solution built into JSF/ICEfaces.
Am I missing something, or is resorting to ugly hacks the only way to achieve the desired behavior?
Multithreading was the missing link, in conjunction with PushRenderer and PortableRenderer (see http://wiki.icesoft.org/display/ICE/Ajax+Push+-+APIs).
I now have three threads in my backing bean- one for executing the long operation, one for polling the status, and one "main" thread for spawning the new threads and returning UI control to the client browser.
Once the main thread kicks off both the execution and polling threads, it terminates, completing the original HTTP request. My PortableRenderer is declared as PortableRenderer portableRenderer; and my init() method (called by the class constructor) contains:
PushRenderer.addCurrentSession("fullFormGroup");
portableRenderer = PushRenderer.getPortableRenderer();
For the threading part, I used implements Runnable on my class, and for handling multiple threads in a single class, I followed this StackOverflow post: How to deal with multiple threads in one class?
Here's some source code. I can't reveal the explicit source code I've used, but this is a boiled-down version that doesn't reveal any confidential information. I haven't tested it, and I wrote it in gedit so it might have a syntax error or two, but it should at least get you started in the right direction.
public void init()
{
// This method is called by the constructor.
// It doesn't matter where you define the PortableRenderer, as long as it's before it's used.
PushRenderer.addCurrentSession("fullFormGroup");
portableRenderer = PushRenderer.getPortableRenderer();
}
public void someBeanMethod(ActionEvent evt)
{
// This is a backing bean method called by some UI event (e.g. clicking a button)
// Since it is part of a JSF/HTTP request, you cannot call portableRenderer.render
copyExecuting = true;
// Create a status thread and start it
Thread statusThread = new Thread(new Runnable() {
public void run() {
try {
// message and progress are both linked to components, which change on a portableRenderer.render("fullFormGroup") call
message = "Copying...";
// initiates render. Note that this cannot be called from a thread which is already part of an HTTP request
portableRenderer.render("fullFormGroup");
do {
progress = getProgress();
portableRenderer.render("fullFormGroup"); // render the updated progress
Thread.sleep(5000); // sleep for a while until it's time to poll again
} while (copyExecuting);
progress = getProgress();
message = "Finished!";
portableRenderer.render("fullFormGroup"); // push a render one last time
} catch (InterruptedException e) {
System.out.println("Child interrupted.");
}
}
});
statusThread.start();
// create a thread which initiates script and triggers the termination of statusThread
Thread copyThread = new Thread(new Runnable() {
public void run() {
File someBigFile = new File("/tmp/foobar/large_file.tar.gz");
scriptResult = copyFile(someBigFile); // this will take a long time, which is why we spawn a new thread
copyExecuting = false; // this will cause the statusThread's do..while loop to terminate
}
});
copyThread.start();
}
I suggest looking at our Showcase Demo:
http://icefaces-showcase.icesoft.org/showcase.jsf?grp=aceMenu&exp=progressBarBean
Under the list of Progress Bar examples is one called Push. It uses Ajax Push (a feature provided with ICEfaces) to do what I think you want.
There is also a tutorial on this page called Easy Ajax Push that walks you through a simple example of using Ajax Push.
http://www.icesoft.org/community/tutorials-samples.jsf

Caching and repeatedly firing NotifyOnChanged

I am attempting to use a file as a trigger to refresh my cached items. When the file is changed, I need to fire an event (every time the file changes). I'm currently using the HostFileChangeMonitor class. Here's where a cached item gets set, using a policy to link it to a file:
private static void SetPolicy(CacheNames cacheName, CacheItem item)
{
string strCacheName = cacheName.ToString();
CacheItemPolicy policy;
if (_policies.TryGetValue(strCacheName, out policy))
{
_cacheObject.Set(item, policy);
return;
}
policy = new CacheItemPolicy();
List<string> filePaths = new List<string> {
string.Format(@"{0}\{1}.txt", Config.AppSettings.CachePath, cacheName.ToString())
};
var changeMonitor = new HostFileChangeMonitor(filePaths);
_cacheObject.Set(item, policy);
changeMonitor.NotifyOnChanged(new OnChangedCallback(RefreshCache));
policy.ChangeMonitors.Add(changeMonitor);
}
The NotifyOnChanged fires only once, however. Because of that, I am currently removing and then re-adding the item to the cache in the RefreshCache method called by the NotifyOnChanged:
private static void RefreshCache(object state)
{
//remove from cache
WcfCache.ClearCache("Refreshed");
//resubscribe to NotifyOnChanged event
WcfCache.SetCache("Refreshed", true, CacheNames.CacheFileName);
//grab all cache data and refresh each in parallel
}
Is there a better way to do this? Is there an event I can tap into that will ALWAYS fire (instead of just the first time like this NotifyOnChanged)? This seems pretty fragile. If the HostFileChangeMonitor doesn't get added properly one time, the entire app's cache will never refresh.
Have you tried the FileSystemWatcher and handling its Changed event? Here's MSDN's documentation on the subject for more detailed info.
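A minimal, untested sketch of that approach, assuming it sits in the same static class as the code above so it can reuse Config.AppSettings.CachePath and RefreshCache:
// requires System.IO
private static FileSystemWatcher _watcher;

private static void WatchCacheTriggerFiles()
{
    _watcher = new FileSystemWatcher(Config.AppSettings.CachePath, "*.txt")
    {
        NotifyFilter = NotifyFilters.LastWrite
    };

    // Changed keeps firing for as long as EnableRaisingEvents is true, so there is no need
    // to re-subscribe after every change (it may fire more than once per save, so some
    // debouncing can be useful).
    _watcher.Changed += (sender, e) => RefreshCache(e.FullPath);
    _watcher.EnableRaisingEvents = true;
}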
I've just discovered this class and from what I get, the NotifyOnChanged is not meant to be fired every time a watched file is updated (even though the name lets you think otherwise).
Rather the file is constantly being watched and cached and you simply need to get it from the cache whenever you need it.
