Vaadin: how to avoid cascading valueChange events between fields

I have several fields on a screen that are partially dependent on each other through validation rules.
When the user changes one field, I can update the other fields using setValue(). The problem I am fighting is that a valueChange event is fired from setValue() just as it is from user activity.
My example: I have four fields "activity_status", "schedule_date", "start_date", "end_date". Editing any one field should affect the other three (changing the status, setting or shifting dates). How can I avoid recursive calls to the valueChange method?
I can imagine a variable justProcessedField working as a lock, but does anybody have a better hands-on solution?

Usually we set a flag when the first valueChange event is triggered and then ignore all others until the first trigger has finished processing.
The pseudocode looks like this:
private boolean _ignoreTriggers = false; // set while we do manual setValue() calls

field1.addListener(new ValueChangeListener() {
    @Override
    public void valueChange(ValueChangeEvent event) {
        if (!_ignoreTriggers) {
            _ignoreTriggers = true;
            // do the processing and setValue(...) calls on the other fields
            _ignoreTriggers = false;
        }
    }
});
With different boolean flags you can also make groups of fields sensitive/insensitive to changes in other fields.

To avoid the ValueChangeEvents you can create custom fields which are extensions of the fields you want to modify. These custom fields should have a public method that calls the setInternalValue method.
Example for a CheckBox field:
public class CheckBoxSilent extends CheckBox {

    /**
     * Sets the new value without notifying any {@link ValueChangeListener}.
     *
     * @param newValue the new value to be set.
     */
    public void setValueSecretly(boolean newValue) {
        setInternalValue(newValue);
        markAsDirty();
    }
}


DDD Event Source raise event for created object

I have a Category class that has a children property. When creating a category, I raise a CategoryCreated event in the constructor, which registers this event in BaseCategory. I also have an Apply method in Category that applies events to the state.
public class Category : BaseCategory
{
    public Category(string id, TranslatableString name, DateTime timestamp)
    {
        Raise(new CategoryCreated(id, name, timestamp));
    }
}
public override void Apply(DomainEvent @event)
{
    switch (@event)
    {
        case CategoryCreated e:
            this.Id = e.Id;
            this.Name = e.Name;
            break;
        ...
Now suppose I want to create a Category and add a child to it.
var category = new Category("1", "2", DateTime.UtcNow);
category.AddChild("some category", "name", DateTime.UtcNow);

foreach (var e in category.UncomittedEvents)
{
    category.Apply(e);
}
When adding a child, I set the private ParentId property of the newly created category to the parent's Id.
public void AddChild(string id, string name, DateTime date)
{
    if (string.IsNullOrWhiteSpace(id))
        throw new ArgumentNullException(nameof(id));

    if (Children.Any(a => a.Id == Id))
        throw new InvalidOperationException("Category already exist ");

    Raise(new CategoryAdded(Guid.NewGuid().ToString(), this.Id /*parent id*/, name, DateTime.UtcNow));
}
public class CategoryAdded : DomainEvent
{
    public CategoryAdded(string id, string parentId, string name, DateTime timestamp) {}
}
The problem is that when the events are applied, the parent id is null, because the events have not yet been applied at the time the parent's Id property is passed as the parent id:
new CategoryAdded(Guid.NewGuid().ToString(), this.Id /*parent id*/, name, DateTime.UtcNow)
Where is the design mistake?
Where and when should the CategoryCreated event be raised?
How would you tackle this situation?
OK, this is not your fault. The literature sucks.
CPearson's answer shows a common mechanism for fixing the symptoms, but I think it is important to see what is going on.
If we are applying the "event sourcing" pattern in its pure form, our data model would look like a stream of events:
class Category {
    private final List<Event> History;
}
Changes to the current state would be achieved by appending events to the History.
public Category(string id, TranslatableString name, DateTime timestamp) {
    History.Add(new CategoryCreated(id, name, timestamp));
}
And queries of the current state would be methods that would search through the event history looking for data.
public Id Id() {
    Id current = null;
    for (Event e : History) {
        if (e instanceof CreatedEvent) {
            current = ((CreatedEvent) e).Id();
        }
    }
    return current;
}
The good news is that the design is relatively simple in principle. The bad news is that the performance is dreadful - reading is usually much more common than writing, but every time we want to read something, we have to go skimming through the events to find the answer.
It's not always that bad -- properties that are constant for the entire life cycle of the entity will normally appear in the first event; to get the most recent version of a property you can often enumerate the history backwards, and stop on the first (most recent) match.
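In the question's C# terms, that backwards scan might look like the sketch below (CategoryRenamed is a hypothetical event type, and the events are assumed to expose their data as properties):
// Scan the history from newest to oldest and stop at the first event
// that carries the property we were asked for.
public TranslatableString Name()
{
    for (int i = History.Count - 1; i >= 0; i--)
    {
        if (History[i] is CategoryRenamed renamed)  // hypothetical rename event
            return renamed.Name;
        if (History[i] is CategoryCreated created)  // the creation event carries the initial name
            return created.Name;
    }
    return null;
}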
But it is still pretty awkward. So to improve query performance we cache the interesting results in properties -- effectively using a snapshot to answer queries. But for that to work, we need to update the cached values (the snapshot) when we add new events to the history.
So the Raise method should be doing two things, modifying the event history, and modifying the snapshot. Modifying the event history is general purpose, so that work often gets shared into a common base class; but the snapshot is specific to the collection of query results we want to cache, so that bit is usually implemented within the "aggregate root" itself.
Because the snapshot we get when restoring the aggregate from the events stored in our database should match the live copy, this design often includes an Apply method that is used in both settings.
Where is the design mistake?
Your Raise(...) method should also call Apply. Remember that your Aggregate is responsible for maintaining a consistent state. Applying events outside of your Aggregate violates that principle.
protected void Raise(DomainEvent @event)
{
    this.Apply(@event);
    this.UncomittedEvents.Add(@event);
}
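To make the "both settings" point concrete, here is a minimal sketch of a base class along those lines. LoadFromHistory is an assumed name (it is not in the question), and UncomittedEvents is spelled as in the question:
using System.Collections.Generic;

public abstract class BaseCategory
{
    public List<DomainEvent> UncomittedEvents { get; } = new List<DomainEvent>();

    public abstract void Apply(DomainEvent @event);

    protected void Raise(DomainEvent @event)
    {
        Apply(@event);                // keep the cached state (the snapshot) consistent
        UncomittedEvents.Add(@event); // record the event so it can be persisted
    }

    // Assumed helper: rehydrate the aggregate from events loaded from the store.
    public void LoadFromHistory(IEnumerable<DomainEvent> history)
    {
        foreach (var e in history)
            Apply(e); // replaying history only rebuilds the snapshot; nothing new to persist
    }
}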

Web API validation error

I have a View Model called SignUp with the EmailAddress property set like this:
[Required]
[DuplicateEmailAddressAttribute(ErrorMessage = "This email address already exists")]
public string EmailAddress { get; set; }
and the custom validator looks like this:
public class DuplicateEmailAddressAttribute : ValidationAttribute
{
    public override bool IsValid(object value)
    {
        PestControlContext _db = new PestControlContext();
        int hash = value.ToString().GetHashCode();
        if (value == null)
        {
            return true;
        }
        if (_db.Users.Where(x => x.EmailAddressHash == hash).Count() > 0)
            return false;
        else
            return true;
    }
}
The problem I'm having is that if the user leaves the email address field blank on the sign-up form, the application throws a NullReferenceException (I think it's looking for "" in the database and can't find it). What I don't understand is why this isn't being handled by the Required attribute - why is it jumping straight into the custom validator?
The Required attribute results in an error being added to the model state, but it does not short-circuit the execution. The framework continues to run the other validators for the simple reason that all the errors about the request need to be sent out in a single shot. Ideally, you wouldn't want the service to report one problem to start with and then, when the user re-submits the request after making a correction, come back and report some other problem, and so on. That would be an annoyance, I guess.
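That is why the usual pattern is for the action to check the model state once and return every collected error in a single response. A minimal sketch, assuming a Web API 2 ApiController action (not code from the question):
public IHttpActionResult Post(SignUp model)
{
    // By the time the action runs, both the [Required] error and the
    // duplicate-email error (if any) are already in ModelState.
    if (!ModelState.IsValid)
        return BadRequest(ModelState); // all validation errors go back in one shot

    // ... create the user ...
    return Ok();
}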
The NullReferenceException is thrown because value.ToString() is called before the check against null. As you need the hash variable only after the check, you can solve this by reordering the statements:
if (value == null)
{
return true;
}
int hash = value.ToString().GetHashCode();
In addition, you could also move the creation of the PestControlContext after the null check and wrap it in a using statement so that it is disposed of properly.
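Putting both suggestions together, a sketch of the reordered validator could look like this (it keeps the question's PestControlContext and EmailAddressHash names and uses Any() instead of Where().Count() > 0):
using System.ComponentModel.DataAnnotations;
using System.Linq;

public class DuplicateEmailAddressAttribute : ValidationAttribute
{
    public override bool IsValid(object value)
    {
        if (value == null)
            return true; // let the [Required] attribute report the missing value

        int hash = value.ToString().GetHashCode();

        // create the context only when there is a value to check, and dispose it afterwards
        using (var db = new PestControlContext())
        {
            return !db.Users.Any(x => x.EmailAddressHash == hash);
        }
    }
}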
As @Baldri also pointed out, each validator can add error messages, and all of them are run even if a previous one has already flagged the data as invalid. Furthermore, I would not rely on the validations running in the order in which the attributes are declared on the property (some frameworks implement their own attribute-ordering mechanism to make the order deterministic, e.g. priorities or preceding attributes).
Therefore, I suggest that reordering the code in the custom validator is the best solution.

CellTable keep TextInputCell old value

I use a CellTable with an EditTextCell.
When the EditTextCell fires the FieldUpdater, I want to run a validation and reset the EditTextCell to the old value if the validation fails. But I can't find out how to update the CellTable or the specific row.
Here is a piece of code:
titleColumn.setFieldUpdater(new FieldUpdater<QuestionDto, String>() {
    @Override
    public void update(int index, QuestionDto object, String value) {
        if (!isValid(value)) {
            // Here I need to set the EditTextCell back to the value in my object
        } else {
            // It's valid, so I do the work
        }
    }
});
I was looking for something like : ((EditTextCell)titleColumn.getCell(index)).setValue(object.getTitle());
The other solution is to reset the whole CellTable like this:
table.setRowData(dataProvider.getList());
But that doesn't work either.
I'm not very knowledgeable about EditTextCell, but for other widgets I would catch the ChangeEvent (is it possible to catch it in the cell you're using?) and then call event.stopPropagation() if I don't want the user action to have any effect.

IList with an implicit sort order

I'd like to create an IList<Child> that maintains its Child objects in a default/implicit sort order at all times (i.e. regardless of additions/removals to the underlying list).
What I'm specifically trying to avoid is the need for all consumers of said IList<Child> to explicitly invoke IEnumerable<T>.OrderBy() every time they want to enumerate it. Apart from violating DRY, such an approach would also break encapsulation as consumers would have to know that my list is even sorted, which is really none of their business :)
The solution that seemed most logical/efficient was to expose IList<Child> as IEnumerable<Child> (to prevent List mutations) and add explicit Add/Remove methods to the containing Parent. This way, I can intercept changes to the List that necessitate a re-sort, and apply one via Linq:
public class Child {
    public string StringProperty;
    public int IntProperty;
}

public class Parent {
    private IList<Child> _children = new List<Child>();

    public IEnumerable<Child> Children {
        get { return _children; }
    }

    private void ReSortChildren() {
        _children = new List<Child>(_children.OrderBy(c => c.StringProperty));
    }

    public void AddChild(Child c) {
        _children.Add(c);
        ReSortChildren();
    }

    public void RemoveChild(Child c) {
        _children.Remove(c);
        ReSortChildren();
    }
}
Still, this approach doesn't intercept changes made to the underlying Child.StringProperty (which in this case is the property driving the sort). There must be a more elegant solution to such a basic problem, but I haven't been able to find one.
EDIT:
I wasn't clear in that I would prefer a LINQ-compatible solution. I'd rather not resort to using .NET 2.0 constructs (i.e. SortedList)
What about using a SortedList<>?
One way you could go about it is to have Child publish an event OnStringPropertyChanged which passes along the previous value of StringProperty. Then create a derivation of SortedList that overrides the Add method to hookup a handler to that event. Whenever the event fires, remove the item from the list and re-add it with the new value of StringProperty. If you can't change Child, then I would make a proxy class that either derives from or wraps Child to implement the event.
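A rough sketch of that idea follows. The event name and the SortedChildCollection wrapper are assumptions (not from the question), and it wraps a SortedList rather than deriving from it, since SortedList<TKey, TValue>.Add is not virtual:
using System;
using System.Collections.Generic;

public class Child
{
    private string _stringProperty;
    public event Action<Child, string> StringPropertyChanged; // (child, old value)

    public string StringProperty
    {
        get { return _stringProperty; }
        set
        {
            string old = _stringProperty;
            _stringProperty = value;
            if (StringPropertyChanged != null)
                StringPropertyChanged(this, old); // tell listeners the sort key changed
        }
    }

    public int IntProperty { get; set; }
}

public class SortedChildCollection
{
    private readonly SortedList<string, Child> _items = new SortedList<string, Child>();

    public void Add(Child c)
    {
        _items.Add(c.StringProperty, c);
        c.StringPropertyChanged += OnKeyChanged; // keep the list sorted when the key changes
    }

    private void OnKeyChanged(Child c, string oldKey)
    {
        _items.Remove(oldKey);            // remove the entry filed under the old key...
        _items.Add(c.StringProperty, c);  // ...and re-add it under the new one
    }

    public IEnumerable<Child> Items
    {
        get { return _items.Values; }
    }
}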
If you don't want to do that, I would still use a SortedList, but internally manage the above sorting logic anytime the StringProperty needs to be changed. To be DRY, it's preferable to route all updates to StringProperty through a common method that correctly manages the sorting, rather than accessing the list directly from various places within the class and duplicating the sort management logic.
I would also caution against allowing the controller to pass in a reference to Child, which allows him to manipulate StringProperty after it's added to the list.
public class Parent {
    private SortedList<string, Child> _children = new SortedList<string, Child>();

    public ReadOnlyCollection<Child> Children {
        get { return new ReadOnlyCollection<Child>(_children.Values); }
    }

    public void AddChild(string stringProperty, int data, Salamandar sal) {
        _children.Add(stringProperty, new Child(stringProperty, data, sal));
    }

    public void RemoveChild(string stringProperty) {
        _children.Remove(stringProperty);
    }

    private void UpdateChildStringProperty(Child c, string newStringProperty) {
        if (c == null) throw new ArgumentNullException("c");
        // remove under the old key, change the key, then re-add so the list stays sorted
        RemoveChild(c.StringProperty);
        c.StringProperty = newStringProperty;
        _children.Add(c.StringProperty, c);
    }

    public void CheckSalamandar(string s) {
        if (_children.ContainsKey(s)) {
            var c = _children[s];
            if (c.Salamandar.IsActive) {
                // update StringProperty through our method
                UpdateChildStringProperty(c, new string(c.StringProperty.Reverse().ToArray()));
                // update other properties directly
                c.Number++;
            }
        }
    }
}
I think that if you derive from KeyedCollection, you'll get what you need. That is only based on reading the documentation, though.
EDIT:
If this works, it won't be easy, unfortunately. Neither the underlying lookup dictionary nor the underlying List in this guy is sorted, nor are they exposed enough such that you'd be able to replace them. It might, however, provide a pattern for you to follow in your own implementation.

Using DataObjectTypeName in DataObjectSource

The functionality I am trying to use is:
- Create an ObjectDataSource for selection and updating controls on a web page (User Control).
- Use the DataObjectTypeName to have an object created that would send the data to an UpdateMethod.
- Before the values are populated in the DataObjectTypeName’s object, I would like to pre-populate the object so the unused items in the class are not defaulted to zeros and empty strings without me knowing whether the zero or default string was set by the user or by the application.
I cannot find a way to pre-populate the values (this was an issue back in 2006 with framework 2.0). One might ask “Why would anyone need to pre-populate the object?”. The simple answer is: I want to be able to randomly place controls on different User Controls and not have to be concerned with which UpdateMethod needs to handle which fields of an object.
For Example, let’s say I have a class (that reflects a SQL Table) that includes the fields: FirstName, LastName, Address, City, State, Zip. I may want to give the user the option to change the FirstName and LastName and not even see the Address, City, State, Zip (or vice-versa). I do not want to create two UpdateMethods where one handled FirstName and LastName and the other method handles the other fields. I am working with a Class of some 40+ columns from multiple tables and I may want some fields on one screen and not another and decide later to change those fields from one screen to another (which breaks my UpdateMethods without me knowing).
I hope I explained my issue well enough.
Thanks
This is hardly a solution to the problem, but it's my best stab at it.
I have a GridView with its DataSourceID set to an ObjectDataSource.
Whenever a row is updated, I want the property values in the object to be selectively updated - that is - only updated if they appear as columns in the GridView.
I've created the following extension:
public static class GridViewExtensions
{
    public static void EnableLimitUpdateToGridViewColumns(this GridView gridView)
    {
        _gridView = gridView;
        if (_gridView.DataSourceObject != null)
        {
            ((ObjectDataSource)_gridView.DataSourceObject)
                .Updating += new ObjectDataSourceMethodEventHandler(objectDataSource_Updating);
        }
    }

    private static GridView _gridView;

    private static void objectDataSource_Updating(object sender, ObjectDataSourceMethodEventArgs e)
    {
        var newObject = ((object)e.InputParameters[0]);
        var oldObjects = ((ObjectDataSource)_gridView.DataSourceObject).Select().Cast<object>();
        Type type = oldObjects.First().GetType();
        object oldObject = null;
        foreach (var obj in oldObjects)
        {
            if (type.GetProperty(_gridView.DataKeyNames.First()).GetValue(obj, null).ToString() ==
                type.GetProperty(_gridView.DataKeyNames.First()).GetValue(newObject, null).ToString())
            {
                oldObject = obj;
                break;
            }
        }
        if (oldObject == null) return;
        var dynamicColumns = _gridView.Columns.OfType<DynamicField>();
        foreach (var property in type.GetProperties())
        {
            if (dynamicColumns.Where(c => c.DataField == property.Name).Count() == 0)
            {
                property.SetValue(newObject, property.GetValue(oldObject, null), null);
            }
        }
    }
}
And in the Page_Init event of my page, I apply it to the GridView, like so:
protected void Page_Init()
{
    GridView1.EnableLimitUpdateToGridViewColumns();
}
This is working well for me at the moment.
You could probably apply similar logic to other controls, e.g. ListView or DetailsView.
I'm currently scratching my head to think of a way this can be done in a rendering-agnostic manner - i.e. without having to know about the rendering control being used.
I hope this ends up as a normal feature of the GridView or ObjectDataSource control rather than having to hack it.
