Why did Play Framework 2 use a custom Scala template engine instead of Scala's built-in XML mode? - template-engine

Play 2.0 uses a custom Scala-based template engine that allows a subset of Scala to be used inside HTML code.
Why was this design decision made instead of using Scala's built-in XML mode?
The Play template engine has some disadvantages, such as:
only a subset of Scala is supported; for example, it seems not to be possible to define functions inside of functions
no editor support in Eclipse
On the other hand, I understand that the Play template engine supports non-well-formed HTML, which wouldn't be possible with Scala's XML mode, but I guess it should always be possible to write the templates in a well-formed way. I'm just a beginner in Play and Scala and would simply like to understand the context.
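For reference, this is roughly what I imagine a template written with Scala's built-in XML literals would look like (a minimal sketch of my own, just to illustrate the alternative):
import scala.xml.Elem

// A "template" as an ordinary Scala function returning an XML tree:
// well-formed by construction, plain Scala throughout.
def userList(names: Seq[String]): Elem =
  <ul>{ names.map(n => <li>{ n }</li>) }</ul>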

I think this has several answers:
It is a template. Templates are not supposed to hold complex logic; all logic manipulation has to be done in the controller/model.
Templates can have about any format you want: email, CSV, SQL and so on. Restricting templates to valid XML would really limit the possibilities of the framework.
Everything is compilable, even routes, assets, template inheritance, etc. For those mechanisms to work with the rest of the framework, some choices probably had to be made. But no one will be able to answer you better than the creators of the framework.
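To illustrate that last point: every template compiles to an ordinary Scala function. For instance, a file app/views/hello.scala.html along these lines (a minimal sketch):
@(name: String)
<h1>Hello, @name!</h1>
becomes views.html.hello, callable from a controller as views.html.hello("world"), and a type error inside it fails the build rather than showing up at runtime.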

One reason may be the clumsy handling of attributes in Scala's XML literals. There are two problems:
attributes like "selected", which must either be present or absent
adding a list of attributes dynamically, like the htmlArgs of the input helper templates
An example in plain Scala shows the difficulties:
import scala.xml.{Attribute, Elem, Null, Text}

// Fold a map of (name -> value) pairs into an element's attribute list.
def addAttributes(element: Elem, attributes: Map[Symbol, _ <: Any]): Elem = {
  var el = element
  for ((key, value) <- attributes) el = el % Attribute(None, key.name, Text(value.toString), Null)
  el
}
// A multi-select helper built on Play's input helper (views.html.helper).
def multiselect[T](field: play.api.data.Field,
                   options: Seq[T],
                   optionValue: T => String,
                   optionText: T => String,
                   args: (Symbol, Any)*)(implicit handler: FieldConstructor, lang: play.api.i18n.Lang) = {
  // The values currently selected in the repeated field.
  val values = field.indexes.map { i => field("[" + i + "]").value }
  input(field, args: _*) { (id, name, value, htmlArgs) =>
    Html(
      addAttributes(
        <select id={ id } name={ name } multiple="multiple">
          {
            options.map { v =>
              // An Option-valued attribute is dropped entirely when None.
              val z: Option[String] = if (values contains Some(optionValue(v))) Some("selected") else None
              <option value={ optionValue(v) } selected={ z map Text }>{ optionText(v) }</option>
            }
          }
        </select>,
        htmlArgs).toString)
  }
}
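For contrast, the Play template engine sidesteps both problems because it treats the markup as text: a conditional can sit directly in attribute position, which no well-formed XML literal can express. A rough, untested sketch in template syntax:
@for(v <- options) {
  <option value="@optionValue(v)" @if(values.contains(Some(optionValue(v)))) { selected="selected" }>@optionText(v)</option>
}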

Related

Defining a set of pre-populated protobuf messages?

I want to be able to supply a pre-instantiated message by name rather than populating it from scratch. For example, given the following schema:
message Animal {
  required int32 num_legs = 1;
  required int32 num_eyes = 2;
}
message Zoo {
  repeated Animal animals = 1;
}
I want to be able to quickly define a Zoo in my config file by choosing from a set of known animals:
// config.json
zoo: {
  animals: [snake, bird]
}
where snake and bird are already defined:
// animals.json
bird: {
  num_legs: 2
  num_eyes: 2
}
snake: {
  num_legs: 0
  num_eyes: 2
}
What's the most elegant way to do this?
The protobuf API has methods to convert protobuf ⬌ JSON. For C++ you can use util::JsonStringToMessage; other languages have their own versions of this API (you didn't specify a language). Wrap this in a simple helper function and use your language's multi-line string constant syntax to embed the message in JSON format directly into your source.
To get your named predefined messages, use a language that has string interpolation. (Not native to C++, unfortunately, but here is a SO answer talking about ways you might do it.)
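For example, a sketch in Scala on top of the Java protobuf runtime (JsonFormat lives in protobuf-java-util; Animal and Zoo stand for the classes generated from the schema above):
import com.google.protobuf.util.JsonFormat

// Predefined messages embedded as JSON multi-line strings, mirroring animals.json.
val animals: Map[String, String] = Map(
  "bird"  -> """{ "num_legs": 2, "num_eyes": 2 }""",
  "snake" -> """{ "num_legs": 0, "num_eyes": 2 }"""
)

// Splice the named animals into a Zoo via string interpolation, then parse.
val zooJson = s"""{ "animals": [ ${animals("snake")}, ${animals("bird")} ] }"""
val builder = Zoo.newBuilder()
JsonFormat.parser().merge(zooJson, builder)
val zoo = builder.build()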

How can I reuse code between Javascript macros and minimize work done within the macros?

I currently have two macros that are part of a (very limited-audience) plugin I'm developing, that both look basically like:
(function(){
    exports.name = "name";
    exports.params = [
        {name: "value"}
    ];
    function get(tiddler) {
        // return some contents of some tiddler fields according to some rule
    }
    function parse(data) {
        // convert string to some kind of useful object
    }
    function logic(x, y) {
        // determine whether the two objects correspond in some way
    }
    function format(data, title) {
        // produce WikiText for a link with some additional decoration
    }
    exports.run = function(value) {
        value = parse(value);
        var result = [];
        this.wiki.each(function(tiddler, title) {
            var data = get(tiddler);
            if (data !== undefined && logic(value, parse(data))) {
                result.push(format(data, title));
            }
        });
        return result.join(" | ");
    };
})();
So they're already fairly neatly factored when considered individually; the problem is that only the core logic is really different between the two macros. How can I share the functions get, logic and format between the macros? I tried just putting them in a separate tiddler, but that doesn't work; when the macros run, TW raises an error claiming that the functions are "not defined". Wrapping each function as its own JavaScript macro in a separate tiddler, e.g.
(function(){
    exports.name = "get";
    exports.params = [
        {name: "tiddler"}
    ];
    exports.run = function(tiddler) {
        // return some contents of some tiddler fields according to some rule
    }
})();
also didn't help.
I'd also like to set this up to be more modular/flexible, by turning the main get/parse/logic/format process into a custom filter, then letting a normal filter expression take care of the iteration and using e.g. a list widget or macro call to display the items. How exactly do I set this up? The documentation tells me
If the provided filter operators are not enough, a developer can add
new filters by adding a module with the filteroperator type
but I can't find any documentation of the API for this, nor any examples.
How can I share the functions get, logic and format between the macros?
You can use the CommonJS-standard require('<tiddler title>') syntax to access the exports of another tiddler. The target tiddler should be set up as a JS module (i.e., the type field set to application/javascript and the module-type field set to library). You can see an example here:
https://github.com/Jermolene/TiddlyWiki5/blob/master/core/modules/widgets/count.js#L15
I'd also like to set this up to be more modular/flexible, by turning the main get/parse/logic/format process into a custom filter, then letting a normal filter expression take care of the iteration and using e.g. a list widget or macro call to display the items. How exactly do I set this up?
The API for writing filter operators isn't currently documented, but there are many examples to look at:
https://github.com/Jermolene/TiddlyWiki5/tree/master/core/modules/filters

Lift Record: empty value for required field but no validation errors

I've been trying to figure out how to do this without manually defining a validation but without any success so far.
I have a StringField
class Foo private() extends MongoRecord[Foo] with ObjectIdKey[Foo] {
  ...
  object externalId extends StringField(this, 255) {
    // none of these seem to have any effect on validation whatsoever:
    override def optional_? = false
    override def required_? = true
    override def defaultValueBox = Empty
  }
  ...
}
Now when I call .validate on a Foo, it returns no errors:
val foo = Foo.createRecord
foo.validate match {
  case Nil => foo.save
  ...
}
...and the document is saved into the (mongo) DB with no externalId.
So the question is: is there any way at all to have Lift automatically validate missing fields without me having to manually add stuff to validations?
EDIT: am I thinking too much in terms of the type of productivity that frameworks like Django and Rails provide out of the box? i.e. things like basic and very frequent validation without having to write anything but a few declarative attributes/flags. If yes, why has Lift opted to not provide this sort of stuff out of the box? Why would anybody not want .validate to automatically take into consideration all the def required_? = true/def optional_? = false fields?
As far as I'm aware, there isn't a way for you to validate a field without explicitly defining validations. The reason that optional_? and required_? don't provide validation is that it isn't always clear what logic to use, especially for non String fields. The required_? value itself is used by Crudify to determine whether to mark the field as required in the produced UI, but it's up to you to provide the proper logic to determine that the requirement is satisfied.
Validating the field can be as easy as
override def validations = valMinLen(1, "Required!") _ :: super.validations
Or see the answer to your other question here for how to create a generic Required trait.
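Putting that together with the field from the question, a minimal sketch (valMinLen comes from the string-field validators that StringField already mixes in):
object externalId extends StringField(this, 255) {
  // An empty value now produces a FieldError from validate.
  override def validations = valMinLen(1, "externalId is required") _ :: super.validations
}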

umbracoNaviHide not working in Where statement

Trying to get children of a given document type, with umbracoNaviHide not set to false:
The following produces correct output.
@foreach (var child in root.Children.Where("ContentTypeAlias == \"DocumentTypehere\""))
{
    if (child.umbracoNaviHide == "False")
    {
        continue;
    }
    <li>@child.Name</li>
}
This does not:
@foreach (var child in root.Children.Where("umbracoNaviHide == @0 && ContentTypeAlias == \"DocumentTypehere\"", "False"))
{
    <li>@child.Name</li>
}
umbracoNaviHide is not supported in Umbraco 5 as per the words of Niels Hartvig:
While these special aliases does [sic] [recte: do] a great job and are super easy to use
(albeit totally impossible to discover if you don't stumble across
docs that mention their usage), the problem with these are that
they're 'magic' strings which really is a mess (read: They're hacks
inside the core).
So they won't come back in v5 in the form we know from v4.
So, besides the LINQ imitations being somewhat broken anyway, the short answer is that this (either form) shouldn't work (nor should Athul's answer).
The long answer is that you could use this property (and others like it) only if you explicitly support it as part of your Document Type. There is a feature request, though, asking for the 'built-in' umbraco... properties here, should you care to follow and support it.
I would personally ask that you don't, however, as using these properties and relying on them is problematic (not least for the point mentioned by Niels himself). Make your own, dedicated properties that are aptly aliased for their task.
You can simply do it like this:
@foreach (var child in root.Children.Where("umbracoNaviHide != true && NodeTypeAlias == \"DocumentTypehere\""))
{
    <li>@child.Name</li>
}
You can just write this for your check on umbracoNaviHide:
if (!child.umbracoNaviHide)
{
    continue;
}
<li>@child.Name</li>
Just to add another approach - a few of the existing answers did not work for me - you can try this. Works for me in Umbraco 4.11
@foreach (var child in root.Children.Where(child => child.GetPropertyValue("umbracoNaviHide") == "0"))
{
    ..
}

What's so great about Func<> delegate?

Sorry if this is basic, but I was trying to pick up on .NET 3.5.
Question: Is there anything great about Func<> and its 5 overloads? From the looks of it, I can still create a similar delegate on my own, say MyFunc<>, with the exact same 5 overloads and even more.
e.g.: public delegate TResult MyFunc<TResult>() and a combo of various overloads...
The thought came up as I was trying to understand Func<> delegates and hit upon the following scenario:
Func<int,int> myDelegate = (y) => IsComposite(10);
This implies a delegate with one parameter of type int and a return type of type int. There are five variations (if you look at the overloads through IntelliSense). So I am guessing that we can have a delegate with no return type?
So am I justified in saying that Func<> is nothing great, and just an example in the .NET framework that we can use, and if needed, create custom "Func<>" delegates to suit our own needs?
Thanks,
The greatness lies in establishing a shared language for better communication.
Instead of defining your own delegate types for the same thing (delegate explosion), use the ones provided by the framework. Anyone reading your code instantly grasps what you are trying to accomplish; it minimizes the time to answer 'what is this piece of code actually doing?'
So as soon as I see a
Action = some method that just does something and returns no output
Comparison = some method that compares two objects of the same type and returns an int to indicate order
Converter = transforms Obj A into equivalent Obj B
EventHandler = response/handler to an event raised by some object given some input in the form of an event argument
Func = some method that takes some parameters, computes something and returns a result
Predicate = evaluate input object against some criteria and return pass/fail status as bool
I don't have to dig deeper than that unless it is my immediate area of concern. So if you feel the delegate you need fits one of these needs, use them before rolling your own.
Disclaimer: Personally I like this move by the language designers.
Counter-argument: Sometimes defining your own delegate may help communicate intent better, e.g. System.Threading.ThreadStart over System.Action. So it's a judgment call in the end.
The Func family of delegates (and their return-type-less cousins, Action) are not any greater than anything else you'd find in the .NET framework. They're just there for re-use so you don't have to redefine them. They have type parameters to keep things generic. E.g., a Func<T, bool> is the same as a System.Predicate<T> delegate. They were originally designed for LINQ.
You should be able to just use the built-in Func delegate for any value-returning method that accepts up to 4 arguments, instead of defining your own delegate for such a purpose, unless you want the name to reflect your intention, which is cool.
Cases where you would absolutely need to define your own delegate types include methods that accept more than 4 arguments, methods with out, ref, or params parameters, and recursive method signatures (e.g., delegate Foo Foo(Foo f)).
In addition to Marxidad's correct answer:
It's worth being aware of Func's related family, the Action delegates. Again, these are types overloaded by the number of type parameters, but declared to return void.
If you want to use Func/Action in a .NET 2.0 project but with a simple route to upgrading later on, you can cut and paste the declarations from my version comparison page. If you declare them in the System namespace then you'll be able to upgrade just by removing the declarations later - but then you won't be able to (easily) build the same code in .NET 3.5 without removing the declarations.
Decoupling dependencies and unholy tie-ups is the one singular thing that makes it great. Everything else one can debate and claim to be doable in some home-grown way.
I've been refactoring a slightly more complex system with an old and heavy lib and got blocked on not being able to break a compile-time dependency, because of the named delegate lurking on "the other side". All assembly loading and reflection didn't help; the compiler would refuse to just cast a delegate() {...} to object, and whatever you do to pacify it would fail on the other side.
Delegate type comparison, which is structural at compile time, turns nominal after that (loading, invoking). That may seem OK while you are thinking in terms of "my darling lib is going to be used forever and by everyone", but it doesn't scale to even slightly more complex systems. Func<> delegates bring a degree of structural equivalence back into the world of nominal typing. That's the aspect you can't achieve by rolling your own.
Example - converting:
class Session {
    public delegate string CleanBody(); // tying you up and you don't see it :-)
    public static void Execute(string name, string q, CleanBody body) ...
to:
public static void Execute(string name, string q, Func<string> body)
Allows completely independent code to do a reflection invocation like:
Type type = Type.GetType("Bla.Session, FooSessionDll", true);
MethodInfo methodInfo = type.GetMethod("Execute");
Func<string> d = delegate() { ..... }; // see Ma - no tie-ups :-)
Object[] args = { "foo", "bar", d };
methodInfo.Invoke("Trial Execution :-)", args); // the target object is ignored for static methods
Existing code doesn't notice the difference, new code doesn't get the dependency - peace on Earth :-)
One thing I like about delegates is that they let me declare methods within methods, like so. This is handy when you want to reuse a piece of code but you only need it within that method. Since the purpose here is to limit the scope as much as possible, Func<> comes in handy.
For example:
string FormatName(string pFirstName, string pLastName) {
    Func<string, string> MakeFirstUpper = (pText) => {
        return pText.Substring(0, 1).ToUpper() + pText.Substring(1);
    };
    return MakeFirstUpper(pFirstName) + " " + MakeFirstUpper(pLastName);
}
It's even easier and more handy when you can use inference, which you can if you create a helper function like so:
Func<T, TReturn> Lambda<T, TReturn>(Func<T, TReturn> pFunc) {
    return pFunc;
}
Now I can rewrite my function without the Func<>:
string FormatName(string pFirstName, string pLastName) {
    var MakeFirstUpper = Lambda((string pText) => {
        return pText.Substring(0, 1).ToUpper() + pText.Substring(1);
    });
    return MakeFirstUpper(pFirstName) + " " + MakeFirstUpper(pLastName);
}
Here's the code to test the method:
Console.WriteLine(FormatName("luis", "perez"));
Though it is an old thread, I had to add that Func<> and Action<> also help us use covariance and contravariance:
http://msdn.microsoft.com/en-us/library/dd465122.aspx
