Custom asType[T] for a type T with Scalatra commands - validation

Our database layer (which uses Slick's data-mapper pattern, so our models are simple case classes, which is ideal for commands) has many ADTs as fields (i.e. a sealed abstract class with a number of case classes/case objects as children). The idea is that each parameter (which is obviously going to be a string from within the params object) gets converted to one of these types with a custom method.
How would you go about creating a custom asType[T] for some type T from the string that you get from params when using Scalatra commands?
For more info on Scalatra commands, see http://www.scalatra.org/guides/formats/commands.html
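A rough sketch of the direction I have in mind (assuming the Scalatra 2.2 commands API, where asType[T] resolves an implicit org.scalatra.util.conversion.TypeConverter[String, T]; the Status ADT and its parse method below are made-up stand-ins for one of our field types, and depending on your Scalatra version asType may require further implicits):

import org.scalatra.commands._
import org.scalatra.util.conversion.TypeConverter

// Stand-in ADT: a sealed hierarchy like the ones stored in our models.
sealed abstract class Status
case object Active extends Status
case object Archived extends Status

object Status {
  // None signals a string that cannot be converted.
  def parse(s: String): Option[Status] = s.toLowerCase match {
    case "active"   => Some(Active)
    case "archived" => Some(Archived)
    case _          => None
  }
}

class SaveItemCommand extends ParamsOnlyCommand {
  // Bringing a TypeConverter into implicit scope is what lets
  // asType[Status] bind the "status" request parameter to the ADT.
  implicit val stringToStatus: TypeConverter[String, Status] =
    new TypeConverter[String, Status] {
      def apply(s: String): Option[Status] = Status.parse(s)
    }

  val status: Field[Status] = asType[Status]("status")
}

When parse returns None the binding fails, so an unconvertible parameter should surface through the command's normal validation errors rather than blowing up the action.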

Related

Groovy pass request params between classes

If I want to handle many parameters from, for example, a web request, and pass them between classes (layers), what is the preferred way?
I know it is easy to pass an optional number of parameters through the constructor as a map.
I could also pass a map directly, and if the keys match the receiving object's property names it should work in a similar way.
Or I could just pass the map and then instantiate, for example, domain classes from it.
I could use a special class as a data carrier with a given number of properties.
I have a domain class (not a database domain class but a business domain class) that needs data from the user interface.
What is the best way to pass data through the layers, and how do I know that all required data is being passed when using a data structure, like a map, with key-value pairs? If I had a more static constructor with a fixed number of parameters, I would know that the parameters were being passed. But how do I ensure this when using a more dynamic approach? With unit tests?
Well, in Grails, command objects are an excellent choice. You can pass them up through various layers without issues. They are pretty analogous to domain classes, only without the persistence functionality.
Otherwise I would recommend using plain old Groovy classes (POGOs). Groovy allows you to keep your code very short (compared to Java and many other languages) and offers very handy transforms for common design patterns you might need (e.g. Canonical, Immutable, IndexedProperty, DelegatesTo...).
Compared to command objects, POGOs do require you to write validation code yourself, but this can be as simple as:
boolean isValid() {
    name && lastName && countryCode in ['US', 'CA']
}
You can keep static factories in a POGO to help you construct instances in various circumstances. Plus you can define more than one class per file, so you can keep the POGO code wherever it makes the most sense. I would definitely prefer this approach to plain maps because the code is better encapsulated, and POGOs can be unit tested and documented.

MongoDB type inference using _class

I've been reading the MongoDB documentation, and Spring adds a _class field by default to the stored data. Is there any way to use this information to get type inference?
For example: there is an abstract class Animal with three subclasses Dog, Cat and Bird. Say you have a class Zoo which contains a list of animals, and you store those Zoo objects in the database. Is there any function to get a List<Animal> back whose elements are instantiated as the correct concrete subclasses?
I'm using Spring, so I'd prefer a solution that works with spring-data-mongodb, but an external mapping library would be fine too. I'd prefer not to write it myself, as this seems like basic mapping functionality.
Make sure you map all the types you mentioned to be stored in the same collection (e.g. using the @Document annotation). Then you can simply execute queries against the collection, handing Animal in to the according method on MongoTemplate. The underlying converter will then automatically instantiate the correct types based on the information stored in _class. The same applies to the usage of Spring Data MongoDB repositories.
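A rough sketch of that setup (written in Scala here, matching the question at the top of the page; the collection name "animals" is an assumption):

import org.springframework.data.mongodb.core.MongoTemplate
import org.springframework.data.mongodb.core.mapping.Document

abstract class Animal

// All concrete types share one collection, so Spring writes the concrete
// class name into _class on save and uses it to instantiate on load.
@Document(collection = "animals") class Dog extends Animal
@Document(collection = "animals") class Cat extends Animal
@Document(collection = "animals") class Bird extends Animal

// The returned list holds Dog/Cat/Bird instances, statically typed as Animal.
def loadAnimals(template: MongoTemplate): java.util.List[Animal] =
  template.findAll(classOf[Animal], "animals")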

Spring BatchSqlUpdate vs NamedParameterJdbcTemplate using named parameters

I have been using the BatchSqlUpdate class successfully for a while. The only annoyance is that named parameters need to be registered before running any query, using the declareParameter or setParameter methods. This means that the types of the parameters have to be declared as well. However, Spring also provides a NamedParameterJdbcTemplate class, which has a very convenient batchUpdate method that takes named parameters as input (an array of maps or SqlParameterSource objects) without the need to declare them first. On top of that, this class can be reused easily, and I also believe it's thread-safe.
So I have a couple of questions about this:
What's the recommended way to perform (multiple) batch updates?
Why is this feature duplicated across two different classes that also behave differently?
Why does BatchSqlUpdate require declared parameters if NamedParameterJdbcTemplate does not?
Thanks for the thoughts!
Giovanni
After doing some research, I reached the following conclusions.
First of all, I realized that the NamedParameterJdbcTemplate class is the only one accepting named parameters for batch updates. The method batchUpdate(String sql, Map[] batchValues) was added in Spring 3 to achieve this.
The BatchSqlUpdate class contains an overridden update(Object... params) method that adds the given statement parameters to the queue rather than executing them immediately, as stated in the javadoc. This means that the statements are executed only when the flush() method is called or the batch size exceeds the maximum. This class doesn't support named parameters, although it contains an updateByNamedParam() method inherited from SqlUpdate. This is unfortunate, since that method would allow reuse of the same map for the named parameters, whereas the NamedParameterJdbcTemplate.batchUpdate() method requires an array of maps, with the associated overhead, code bloat and complications in reusing the array of maps if the batch size is variable.
I think it would be useful to have an overridden version of updateByNamedParam(Map paramMap) in BatchSqlUpdate that behaves just like update(Object... params) but with added support for named parameters.
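For reference, the map-per-row style that batchUpdate(String, Map[]) imposes looks roughly like this (sketched in Scala to match the question at the top of the page; the table and column names are invented):

import javax.sql.DataSource
import org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate
import scala.collection.JavaConverters._

// One java.util.Map of named parameters per row; nothing is declared up
// front, unlike BatchSqlUpdate's declareParameter/setParameter dance.
def insertPeople(ds: DataSource, people: Seq[(String, String)]): Array[Int] = {
  val template = new NamedParameterJdbcTemplate(ds) // safe to build once and reuse
  val sql = "INSERT INTO person (first_name, last_name) VALUES (:first, :last)"
  val batch: Array[java.util.Map[String, _]] = people.map { case (first, last) =>
    Map[String, AnyRef]("first" -> first, "last" -> last).asJava
  }.toArray
  template.batchUpdate(sql, batch)
}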

Best way to validate and extend constructor parameters in Scala 2.10

I want to have a class with a number of fields (String, Boolean, etc.), and when the class is constructed I want a fieldName associated with each field, and I want to verify each field (using a regex for strings). Ideally I would just like to specify in the constructor that the parameter needs to meet certain criteria.
Some sample code of what I'm after:
case class Data(val name: String ..., val fileName: String ...) {
  name.verify
  // Access the fieldName associated with the name parameter.
  println(name.fieldName) // "Name"
  println(fileName.fieldName) // "File Name"
}

val x = Data("testName", "testFile")
// Treat name as if it were just a String field in Data
x.name // Is of type String, does not expose fieldName, etc.
Is there an elegant way to achieve this?
EDIT:
I don't think I have managed to get across clearly what I am after.
I have a class with a number of string parameters. Each of those parameters needs to be validated in a specific way, and I also want a string fieldName associated with each parameter. However, I still want to be able to treat the parameter as if it were just a normal string (see the example).
I could code the logic into Data and into an apply method of the Data companion object for each parameter, but I was hoping for something more generic.
Putting logic (such as parameter validation) in constructors is dubious. Throwing exceptions from constructors is doubly so.
Usually this kind of creational pattern is best served with one or more factory methods or a builder of some sort.
For a basic factory, just define a companion object with the factory methods you want. If you want the same shorthand construction notation (new-free), you can overload the predefined apply (though you may not replace the one whose signature matches the case class constructor exactly).
If you want to spare your client code the messiness of dealing with exceptions when validation fails, you can return Option[Data] or Either[ErrorIndication, Data] instead. Or you can go with Scalaz's Validation, which I'm going to arbitrarily declare to be beyond the scope of this answer ('cause I'm not sufficiently familiar with it...).
However, you cannot have instances that differ in which properties they expose. Not even subclasses can subtract from the public API. If you need to do that, you'll need a more elaborate construct, such as a trait for the common parts and separate case classes for the variants and/or extensions.
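A minimal sketch of such a factory, reusing the Data fields from the question (the validation rules themselves are invented):

case class Data(name: String, fileName: String)

object Data {
  private val NamePattern = "[A-Za-z][A-Za-z ]*".r

  // Validating factory: nothing throws, callers pattern-match on the result.
  def validated(name: String, fileName: String): Either[String, Data] =
    if (!NamePattern.pattern.matcher(name).matches())
      Left(s"Invalid name: '$name'")
    else if (fileName.trim.isEmpty)
      Left("File name must not be empty")
    else
      Right(Data(name, fileName))
}

Data.validated("testName", "testFile") // Right(Data(testName,testFile))
Data.validated("", "test.txt")         // Left(Invalid name: '')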

grails - I need to define my validation at runtime

I have an idea to read an XML document from the database and generate simple CRUD screens (via Grails) based on the data it defines. My application will call RESTful services to persist the data, so I don't need Hibernate on the client side. I have ideas about how to generate the UI, but where I'm stumped is how to perform the validation.
I'll have a single, generic domain/command object that contains only the fields that are common to all instances of this "runtime" data type. All other fields are defined via the XML found in the database. I need something like this:
String xml // defines the fields, constraints, UI information for this data type

static constraints = {
    xml validator: { value, obj ->
        callMyCustomValidator(obj)
    }
}
and in my callMyCustomValidator method, I'll extract the XML for obj and perform my validation as needed.
Note: we have a working example of this in a different app (written with Java/servlets/JSPs), and without any formal "framework" this isn't difficult to do. Why do I need this? We need to add simple data types on the fly (via script) without a release.
You can use the validator constraint to add custom validation to your domain class, as in the constraints block above: attach it to one of your common fields and do the XML-driven checks inside it.
