Using invariant with Dexterity form and fieldsets - validation

I have a content type derived from plone.directives.form.Schema; it has several dozen fields across four fieldsets. I'm trying to create a zope.interface.invariant that looks at fields from two different fieldsets.
From tracing the behaviour, it looks like the invariant is called once for each fieldset, but not for the entire form.
I'm aware I can provide my own handler and perform all the checks I need there, but that feels clunky compared to distinctly defined invariants. While the obvious solution is to move the related fields onto the same fieldset, the current setup reflects a layout that is logical to the end user.
Is there an existing hook where I could perform validation on multiple fields across fieldsets?

The answer seems to be no: z3c.form.group.Group.extractData calls z3c.form.form.BaseForm.extractData once for each group/fieldset, and this call already includes invariant validation.
Instead of registering your own handler, you could also override extractData:
import datetime

from five import grok
from plone.directives import form, dexterity
from z3c.form.interfaces import ActionExecutionError, WidgetActionExecutionError
from zope.interface import Invalid
# ... (_ is your i18n MessageFactory; IMyEvent is your schema interface)

class EditForm(dexterity.EditForm):
    grok.context(IMyEvent)

    def extractData(self, setErrors=True):
        data, errors = super(EditForm, self).extractData(setErrors)
        if None not in (data['start'], data['end']):
            if data['end'] < data['start']:
                raise WidgetActionExecutionError(
                    'end', Invalid(_(u"End date should not lie before the start date.")))
            if data['end'] - data['start'] > datetime.timedelta(days=7):
                raise WidgetActionExecutionError(
                    'end', Invalid(_(u"Duration of convention should be shorter than seven (7) days.")))
        return data, errors
Please note that this class derives from dexterity.EditForm, which includes Dexterity's default handlers, instead of form.SchemaForm.
WidgetActionExecutionError does not work reliably, though. For some fields, it produces a 'KeyError'.
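If you hit that KeyError, one possible fallback (my own sketch, not part of the original answer) is to raise a form-wide ActionExecutionError instead, which is already imported above; the message is then reported for the whole form rather than attached to the 'end' widget:
# Sketch of a fallback inside the same EditForm.extractData override as above.
def extractData(self, setErrors=True):
    data, errors = super(EditForm, self).extractData(setErrors)
    if None not in (data['start'], data['end']):
        if data['end'] < data['start']:
            # Form-wide error instead of a widget-specific one.
            raise ActionExecutionError(
                Invalid(_(u"End date should not lie before the start date.")))
    return data, errors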

Related

django_filter Django Rest Framework (DRF) handling query_params get vs getlist

Is it correct that filtering by a list of values remains overly complicated in django_filter and, by extension, the Django Rest Framework (DRF), or am I missing something simple?
Specifically, must we specify a priori the parameters that should be handled with 'getlist' rather than with 'get' when we process a multi-value parameter submitted via a URL like the one below?
http://example.com?country=US&country=CN
Now, when we process this request,
request.query_params.get('country') as well as request.query_params['country'] will return only 'CN', while request.query_params.getlist('country') will return ['US', 'CN'], which is what we want in this example.
So my current logic dictates that I have to specify in advance which parameters may have multiple values specified, as I don't know of an internal method that handles this transparently:
So to handle this I currently do something like this (skipping all the actual request processing, filtering, etc. for the sake of simplicity):
import django_filters as filters
from django.http import HttpResponse
from rest_framework import viewsets

from .models import Country  # assumed import path for the model


class CountryFilter(filters.FilterSet):
    """Placeholder: it is a separate question why I can't get any of these to work as desired."""
    # country = filters.CharFilter(method='filter_country')
    # country = filters.CharFilter(name='country', lookup_expr='in')
    # country = filters.MultipleChoiceFilter()


class CountryViewSet(viewsets.ModelViewSet):
    """Viewset to work with countries by ISO code"""

    def list(self, request, *args, **kwargs):
        """Do something based on a request for one or more countries,
        such as http://example.com?country=CA&country=CN&country=BR
        """
        query_params = request.query_params
        filter0 = CountryFilter(query_params, queryset=Country.objects.all())
        # if filter0 worked, we'd be done and wouldn't need the code below
        # but it doesn't
        query_dict = {}
        params_to_list = ['country', 'another_param']
        for k in query_params.keys():
            if k in params_to_list:
                query_dict[k] = query_params.getlist(k)
            else:
                query_dict[k] = query_params.get(k)
        # use the newly constructed query_dict rather than query_params
        filter1 = CountryFilter(query_dict, queryset=Country.objects.all())
        import pdb; pdb.set_trace()
        return HttpResponse('ok')
In theory, filter1 should give me the desired results while filter0 will not.
Code based on this works in my production environment but has some other effects like turning the valid list back into a string. I don't want to drift into that question here, just wondering if this is what people do or not?
"Specifically, must we specify a priori the parameters that should be handled with 'getlist' rather than with 'get' when we process a multi-value parameter submitted via a URL like the one below?"
This is not necessary. Filter instances construct a form field that in turn performs the query param validation. These fields are standard Django form fields, and each field's widget determines whether to use .getlist() or .get() when extracting values from the query dict. You do not need to preprocess the query dict.
In your example, filter0 and filter1 should actually behave identically, since QueryDict('a=1&a=2') is functionally equivalent to dict(a=['1', '2']) for SelectMultiple widgets. The widget's .value_from_datadict() will call .getlist() on the former and .get() on the latter; both will return a list of values.
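As a quick illustration (my own sketch, not part of the original answer; run it in a Django shell so settings are configured), SelectMultiple picks the right accessor for either data structure:
from django.forms.widgets import SelectMultiple
from django.http import QueryDict

widget = SelectMultiple()
# QueryDict has .getlist(), so the widget uses it:
print(widget.value_from_datadict(QueryDict('country=US&country=CN'), {}, 'country'))  # ['US', 'CN']
# A plain dict only has .get(); the pre-built list comes back unchanged:
print(widget.value_from_datadict({'country': ['US', 'CN']}, {}, 'country'))  # ['US', 'CN']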
As to solving your issue of filtering by multiple values, you have two options.
Use a MultipleChoiceFilter (docs)
Use a CSV-based filter (docs)
Multiple choice filters render a select multiple widget and expect a querystring like ?a=1&a=2. You attempted this above, however you omitted choices. Without choices, the HTML select will be rendered without options, and validation will subsequently fail (since it uses those same options to validate input). Alternatively, you can use ModelMultipleChoiceFilter, which requires a queryset to construct its choices from. Note that the queryset/choice handling is built-in behavior of the underlying form field and isn't specific to django-filter.
For the latter option, there are more details in the docs linked above, but it's worth mentioning that it renders a text input and expects a querystring like ?a=1,2 instead of ?a=1&a=2.
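For illustration, a sketch of both options (my own example: the model, field names and choices are assumptions, and field_name was spelled name in older django-filter releases, as in the question):
import django_filters as filters

from .models import Country  # assumed: the model used in the question


# Option 2 helper: CSV-style filtering, e.g. ?country=US,CN
class CharInFilter(filters.BaseInFilter, filters.CharFilter):
    """Accepts a comma-separated list and applies an __in lookup."""


class CountryFilter(filters.FilterSet):
    # Option 1: one value per parameter, e.g. ?country=US&country=CN.
    # MultipleChoiceFilter needs explicit choices to validate against;
    # ModelMultipleChoiceFilter would derive them from a queryset instead.
    country = filters.MultipleChoiceFilter(
        field_name='iso_code',  # assumed model field holding the ISO code
        choices=[('US', 'United States'), ('CN', 'China'), ('BR', 'Brazil')],
    )

    # country = filters.ModelMultipleChoiceFilter(
    #     field_name='iso_code', to_field_name='iso_code',
    #     queryset=Country.objects.all())

    # country = CharInFilter(field_name='iso_code')  # the CSV variant

    class Meta:
        model = Country
        fields = []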

Xtext disable validation from imported metamodel

I would like to know if it is possible to disable the validation for a subset of model elements which are specified in the metamodel.
The problem is that I'm getting some validation errors from the Xtext editor while writing my DSL file. So my idea is to disable the validation for exactly this model element.
I'm trying to build a really simple textual notation and want to serialize the (valid) model when the file is saved. The saved model is modified during the saving process, so it is valid at the end.
Regards,
Alex
Let's begin with the grammar. I'm working with an imported metamodel (UML2):
import "http://www.eclipse.org/uml2/4.0.0/UML"
Then I create all the necessary parser rules to define a class diagram. In my case the problem appears in the parser rule for associations between classes:
AssociationClass_Impl returns AssociationClass:
    {AssociationClass} 'assoc' name=ID '{'
        (ownedAttribute+=Property_Impl)*
    '}';
And of course the parser rule for properties:
Property_Impl returns Property:
name=ID ':' type=[Type|QualifiedName]
(association=[AssociationClass|QualifiedName])?
;
Now some words about the problem itself. While editing the Xtext file in the Xtext editor of the runtime Eclipse, the built model is validated. The problem here is that the metamodel itself has several constraints for an AssociationClass (screenshot not possible yet):
Multiple markers at this line
- The feature 'memberEnd' of 'org.eclipse.uml2.uml.internal.impl.AssociationClassImpl#142edebe{platform:/resource/aaa/test.mydsl#//Has}'
with 0 values must have at least 2 values
- The feature 'relatedElement' of 'org.eclipse.uml2.uml.internal.impl.AssociationClassImpl#142edebe{platform:/resource/aaa/test.mydsl#//Has}'
with 0 values must have at least 1 values
- The feature 'endType' of 'org.eclipse.uml2.uml.internal.impl.AssociationClassImpl#142edebe{platform:/resource/aaa/test.mydsl#//Has}'
with 0 values must have at least 1 values
- An association has to have min. two ownedends.
And now I want to know whether it is possible to disable the validation for exactly this model element, so I can hide the error information from the user, because I want to serialize the created Xtext model in the next step and will do some model transformations.
It seems like UML registers this validator in the global singleton registry, so you basically need to avoid using that registry. You can do that by binding a different element in your runtime module:
public EValidator.Registry bindEValidatorRegistry() {
    // return an empty one as opposed to EValidator.Registry.INSTANCE
    return new org.eclipse.emf.ecore.impl.ValidationDelegateRegistryImpl();
}

Some problems with MapperExtension of sqlalchemy

There are two classes: User and Question
A user may have many questions, and it also has a question_count field to record the number of questions belonging to him.
So, when I add a new question, I want to update the question_count of the user. At first, I did it like this:
question = Question(title='aaa', content='bbb')
Session.add(question)
Session.flush()
user = question.user
### user is not None
user.question_count += 1
Session.commit()
Everything goes well.
But I want to use an event callback to do the same thing, as follows:
from sqlalchemy.orm.interfaces import MapperExtension

class Callback(MapperExtension):
    def after_insert(self, mapper, connection, instance):
        user = instance.user
        ### user is None !!!
        user.question_count += 1

class Question(Base):
    __tablename__ = "questions"
    __mapper_args__ = {'extension': Callback()}
    ....
Note in the "after_insert" method:
instance.user # -> Get None!!!
Why?
If I change that line to:
Session.query(User).filter_by(id=instance.user_id).one()
I can get the user successfully, but the user can't be updated! Look, I have modified the user:
user.question_count += 1
But there is no 'update' SQL printed in the console, and question_count is not updated.
I tried adding Session.flush() or Session.commit() in the after_insert() method, but both cause errors. Is there something important I'm missing? Please help me, thank you.
The author of SQLAlchemy gave me a useful answer on a forum; I copy it here:
Additionally, a key concept of the unit of work pattern is that it organizes a full list of all INSERT, UPDATE, and DELETE statements which will be emitted, as well as the order in which they are emitted, before anything happens. When the before_insert() and after_insert() event hooks are called, this structure has been determined and cannot be changed in any way. The documentation for before_insert() and before_update() mentions that the flush plan cannot be affected at this point - only individual attributes on the object at hand, and those which have not been inserted or updated yet, can be affected here. Any scheme which would like to change the flush plan must use SessionExtension.before_flush.
However, there are several ways of accomplishing what you want here without modifying the flush plan.
The simplest is what I already suggested. Use MapperExtension.before_insert() on the "User" class, and set user.question_count = len(user.questions). This assumes that you are mutating the user.questions collection, rather than working with Question.user to establish the relationship. If you happened to be using a "dynamic" relationship (which is not the case here), you'd pull the history for user.questions and count up what's been appended and removed.
The next way is to do pretty much what you think you want here, that is, implement after_insert on Question, but emit the UPDATE statement yourself. That's why "connection" is one of the arguments to the mapper extension methods:
def after_insert(self, mapper, connection, instance):
    connection.execute(users_table.update().\
        values(question_count=users_table.c.question_count + 1).\
        where(users_table.c.id == instance.user_id))
I wouldn't prefer that approach since it's quite wasteful for many new Questions being added to a single User. So yet another option, if User.questions cannot be relied upon and you'd like to avoid many ad-hoc UPDATE statements, is to actually affect the flush plan by using SessionExtension.before_flush:
class MySessionExtension(SessionExtension):
    def before_flush(self, session, flush_context):
        for obj in session.new:
            if isinstance(obj, Question):
                obj.user.question_count += 1
        for obj in session.deleted:
            if isinstance(obj, Question):
                obj.user.question_count -= 1
To combine the "aggregate" approach of the "before_flush" method with the "emit the SQL yourself" approach of the after_insert() method, you can also use SessionExtension.after_flush, to count everything up and emit a single mass UPDATE statement with many parameters. We're likely well in the realm of overkill for this particular situation, but I presented an example of such a scheme at Pycon last year, which you can see at http://bitbucket.org/zzzeek/pycon2010/src/tip/chap5/sessionextension.py.
And, as I tried it out, I found we should update user.question_count in after_flush.
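For what it's worth, here is a minimal sketch of that after_flush aggregation idea (my own illustration, not from the thread; it reuses the users_table and Question names from above and the old SessionExtension API):
from collections import defaultdict

from sqlalchemy import bindparam
from sqlalchemy.orm.interfaces import SessionExtension

class AggregateCountExtension(SessionExtension):
    def after_flush(self, session, flush_context):
        # session.new / session.deleted still reflect the pre-flush state here.
        delta = defaultdict(int)
        for obj in session.new:
            if isinstance(obj, Question):
                delta[obj.user_id] += 1
        for obj in session.deleted:
            if isinstance(obj, Question):
                delta[obj.user_id] -= 1
        if delta:
            # One UPDATE statement, executed once with many parameter sets.
            session.connection().execute(
                users_table.update().
                where(users_table.c.id == bindparam('uid')).
                values(question_count=users_table.c.question_count + bindparam('delta')),
                [{'uid': uid, 'delta': d} for uid, d in delta.items()])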
user, being (I assume) a RelationshipProperty, is only populated after the flush (as it is only at this point that the ORM knows how to relate the two rows).
It looks like question_count is actually a derived property, being the number of Question rows for that user. If performance is not a concern, you could use a read-only property and let the mapper do the work:
@property
def question_count(self):
    return len(self.questions)
Otherwise you're looking at implementing a trigger, either at the database-level or in python (which modifies the flush plan so is more complicated).

Avoiding duplicate code in input validation

Suppose you have a subsystem that does some kind of work. It could be anything. Obviously, at the entry point(s) to this subsystem there will be certain restrictions on the input. Suppose this subsystem is primarily called by a GUI. The subsystem needs to check all the input it receives to make sure it's valid. We wouldn't want to FireTheMissles() if there was invalid input. The UI is also interested in the validation though, because it needs to report what went wrong. Maybe the user forgot to specify a target or targeted the missiles at the launchpad itself. Of course, you can just return a null value or throw an exception, but that doesn't tell the user SPECIFICALLY what went wrong (unless, of course, you write a separate exception class for each error, which I'm fine with if that's the best practice).
Of course, even with exceptions, you have a problem. The user might want to know if input is valid BEFORE clicking the "Fire Missiles!" button. You could write a separate validation function (of course IsValid() doesn't really help much because it doesn't tell you what went wrong), but then you'll be calling it from the button click handler and again from the FireTheMissles() function (I really don't know how this changed from a vague subsystem to a missile-firing program). Certainly, this isn't the end of the world, but it seems silly to call the same validation function twice in a row without anything having changed, especially if this validation function requires, say, computing the hash of a 1 GB file.
If the preconditions of the function are clear, the GUI can do its own input validation, but then we're just duplicating the input validation logic, and a change in one might not be reflected in the other. Sure, we may add a check to the GUI to make sure the missile target is not within an allied nation, but if we forget to copy it to the FireTheMissles() routine, we'll accidentally blow up our allies when we switch to a console interface.
So, in short, how do you achieve the following:
Input validation that tells you not just that something went wrong, but what specifically went wrong.
The ability to run this input validation without calling the function which relies on it.
No double validation.
No duplicate code.
Also, and I just thought of this, but error messages should not be written in the FireTheMissles() method. The GUI is responsible for picking appropriate error messages, not the code the GUI is calling.
"The subsystem needs to check all the input it receives to make sure it's valid"
Think of the inputs not so much as a list of arguments, but as a message, it gets easier after that.
The message class has an IsValid member function; it remembers whether IsValid was called and what the result was. It also remembers its state: if the state changes, it needs to be revalidated. The message class also keeps a list of validation errors.
Now the UI builds a TargetMissiles message; the UI can validate it itself or pass it directly to the MissileFiring subsystem, which checks whether the message has already been validated, validates it if not, and proceeds or fails accordingly.
The UI gets the message back, with the list of validation errors already populated.
The messages with their validation sit in a separate library. No code is duplicated.
Does this sound OK?
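A minimal sketch of that idea in Python (my own illustration; the TargetMissiles name and the individual checks are assumptions, not from the answer):
class TargetMissilesMessage:
    """Carries the inputs plus its own validation state and error list."""

    def __init__(self, target, missile_count):
        self.target = target
        self.missile_count = missile_count
        self._validated = False
        self.errors = []

    def validate(self):
        """Populate self.errors and remember that validation has run."""
        self.errors = []
        if not self.target:
            self.errors.append("No target specified.")
        if self.target == "LAUNCHPAD":
            self.errors.append("Target must not be the launchpad itself.")
        if self.missile_count < 1:
            self.errors.append("At least one missile is required.")
        self._validated = True
        return not self.errors

    def is_valid(self):
        # Re-validate only if it hasn't been done yet (or after a state change).
        if not self._validated:
            self.validate()
        return not self.errors


# The subsystem and the GUI share the same message class:
def fire_the_missles(msg):
    if not msg.is_valid():
        return msg.errors  # the GUI decides how to present these
    # ... actually fire ...
    return []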
This is what Model-View-Controller is all about.
You build up a model (a launch, which is composed of coordinates, missile types and number of missiles) and the model has a validate method which returns a list of errors/warnings. When you update the model (on key-up, <ENTER>, button-press), you call the validate method and show the user any warnings, errors, etc. (Eclipse has a little area just under the toolbar in a dialog that does this; you might want to look at that.)
When the model is valid, you activate the launch missiles button so that the user knows that they can launch the missiles. If you have an update event that is called particularly frequently or a part of the validation that is particularly costly, you can have a validate_light method on the model that you use for validating only the parts that are easy to do.
When you switch to a console based UI you build up the model from the command line arguments, call the same validate method (and report errors to stderr) and then launch the missiles.
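Roughly, a sketch of that shape (my own example; the model name and checks are assumptions for illustration):
import sys

class LaunchModel:
    """Shared model: the GUI and the console front end both call validate()."""
    def __init__(self, coordinates=None, missile_type=None, count=0):
        self.coordinates = coordinates
        self.missile_type = missile_type
        self.count = count

    def validate(self):
        """Return a list of error strings; an empty list means the model is valid."""
        errors = []
        if self.coordinates is None:
            errors.append("No target coordinates given.")
        if self.count < 1:
            errors.append("Number of missiles must be at least 1.")
        return errors

# GUI: call validate() on every change and enable the launch button when it returns [].
# Console: build the model from argv and reuse exactly the same checks.
def main(argv):
    model = LaunchModel(coordinates=argv[1], missile_type=argv[2], count=int(argv[3]))
    errors = model.validate()
    if errors:
        for e in errors:
            print(e, file=sys.stderr)
        return 1
    # launch_missiles(model)  # hypothetical subsystem call
    return 0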
Double the validation. In many cases the validation is even trebled (FKs and not-null fields in the DB, for example). Depending on your platform it may be possible to code the validation rules once. For example, your front-end and back-end code could share C# business classes. Alternatively, you could store the validation rules as metadata that both the back end and front end can access and apply.
In reality, the fact that you need different responses to a validation problem (for example, the Fire Missile button shouldn't even be enabled until the other inputs are valid) means there will be different code associated with the same rule.
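One small sketch of keeping a single rule definition while allowing those different responses (my own illustration; the names are made up):
# One place where the rules live; both the GUI and the subsystem import them.
LAUNCH_RULES = [
    (lambda d: d.get("target") is not None, "A target must be specified."),
    (lambda d: d.get("target") != "LAUNCHPAD", "The target may not be the launchpad."),
]

def failed_rules(data):
    """Return the message of every rule that does not pass."""
    return [message for passes, message in LAUNCH_RULES if not passes(data)]

# GUI response to the same rules: enable or disable the button.
def on_input_changed(form_data, fire_button):
    fire_button.enabled = not failed_rules(form_data)

# Subsystem response to the same rules: refuse to fire.
def fire_the_missles(form_data):
    errors = failed_rules(form_data)
    if errors:
        raise ValueError("; ".join(errors))
    # ... actually fire ...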
I'd suggest an input validation class, which takes the input type (an enumeration) in its constructor and provides a public IsValid method.
The IsValid method should return a boolean TRUE for valid and FALSE for invalid. It should also have an OUT parameter that takes a string and assigns a status message to that string. The caller will be free to ignore that message if it wants to, or report it up to the GUI if that's appropriate for the context.
So, in pseudocode (forgive the Delphi-like syntax, but it should be readable to anybody):
type
  //different types of data we might want to validate
  TValidationType = (vtMissileLaunchCodes, vtFirstName,
    vtLastName, vtSSN);

  TInputValidator = class
  public
    //call the constructor with the validation type
    constructor Create(ValidationType: TValidationType);
    //this should probably be ABSTRACT, implemented by descendants
    //if you took that approach, then you'd have 1 descendant class
    //for each validation type, instead of an enumeration
    function IsValid(InputData: string; var msg: string): boolean;
  end;
And then to use it, you'd do something like this:
procedure ValidateForm;
var
  validator: TInputValidator;
begin
  validator := TInputValidator.Create(vtSSN);
  if validator.IsValid(edtSSN.Text, labelErrorMsg.Text) then
    SaveData; //it's valid, so save it!
  //if it wasn't valid, then the error msg is in the GUI in "labelErrorMsg".
end;
Each piece of data has its own metadata (type, format, unit, mask, range, etc.). Unfortunately this is not always specified.
The GUI controls need to check the input against the metadata and give warnings/errors if the data is invalid.
Example: a number has a range. The range is provided by the metadata, but the range check is performed by the control.
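A tiny sketch of that, with the metadata kept as plain data and a generic check applied by the control layer (my own example; the field names are illustrative):
import re

# Metadata describing each field; ideally generated from one authoritative source.
FIELD_META = {
    "missile_count": {"type": int, "range": (1, 10)},
    "target_code": {"type": str, "mask": r"^[A-Z]{2}-\d{4}$"},
}

def check_value(field, value):
    """Generic check driven purely by the metadata; returns a list of problems."""
    meta = FIELD_META[field]
    if not isinstance(value, meta["type"]):
        return ["%s: expected %s" % (field, meta["type"].__name__)]
    problems = []
    low, high = meta.get("range", (None, None))
    if low is not None and not (low <= value <= high):
        problems.append("%s: must be between %s and %s" % (field, low, high))
    if "mask" in meta and not re.match(meta["mask"], value):
        problems.append("%s: does not match the expected format" % field)
    return problems

print(check_value("missile_count", 12))       # ['missile_count: must be between 1 and 10']
print(check_value("target_code", "US-0042"))  # []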

Silverlight 3 Validation using Prism

I'm developing a SL3 application with Prism. I need to have support for validation (both field level (on the setter of the bound property) and before save (form level)), including a validation summary, shown when the save button is pressed.
But the samples I can find by googling are either SL3 with a lot of code in the code-behind (very uncool and un-Prismy), or WPF-related.
Does anyone know a reference application with some actual validation I can look into?
Cheers,
Ali
There aren't any from Microsoft at present, but I'll pass this one on to the PRISM team tomorrow to see if we can get a basic form validation example into the next rev of PRISM.
That being said, you can put a validator on each form that essentially validates every field (semantic and/or syntax validation) and returns a true/false state depending on whether they all pass.
The way I typically do this is to attach a "CanSave" method to my Commands, i.e.:
SaveOrderCommand = new DelegateCommand<object>(this.Save, this.CanSave);

private bool CanSave(object arg)
{
    return this.errors.Count == 0 && this.Quantity > 0;
}
Then in this.CanSave, I either put the basic validation inside this codebase, or I call a bunch of other validators depending on the context - some would be shared across all modules (e.g. IsEmailValid would be one validator I place in my Infrastructure module as a singleton; I pass in my string and it returns true/false as a result). Once they all pass, ensure CanSave returns true. If they fail, CanSave will return false.
Now, if they fail and you want to trigger a friendly reminder to the user that it has failed, there are a number of techniques you can use here. I've typically flagged the control in question at validation time as being "failed" (I wrote my own, mind you, so it's up to you which toolkits you use here - http://www.codeplex.com/SilverlightValidator is not a bad one).
Now, I typically like to do more with forms that have validation on them by not only highlighting the control in question (red box, icon, etc.) but also explaining to the user in more detail what's required of them - thus a custom approach is the solution I've opted for.
At the end of the day, you're going to have to do some of the heavy lifting to validate your particular form - but look into ways to reuse validators where they make sense (email, SSN, etc. are easy ones to reuse).
HTH?
Scott Barnes - Rich Platforms Product Manager - Microsoft.
