Difference between login.event and logout.event in Liferay events

I hooked up these two events with one class, and my question is: how can I recognize when the class is called by login.event and when by logout.event?
My class extends Action.

The easiest way that comes to my mind: Implement the two events in different classes. If you desperately want the implementation to be in a single class, delegate to it from the action classes.
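For illustration, a minimal sketch of that delegation approach (the class names here are made up, and the wiring assumes the usual hook properties such as login.events.post and logout.events.post):
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.liferay.portal.kernel.events.Action;
import com.liferay.portal.kernel.events.ActionException;

// Shared implementation; the two thin Action subclasses below only tell it
// which event fired.
public class LoginLogoutHandler {

    public void handle(boolean login, HttpServletRequest request) {
        // common logic for both events goes here
    }
}

class LoginPostAction extends Action {

    private final LoginLogoutHandler handler = new LoginLogoutHandler();

    @Override
    public void run(HttpServletRequest request, HttpServletResponse response)
            throws ActionException {
        handler.handle(true, request);
    }
}

class LogoutPostAction extends Action {

    private final LoginLogoutHandler handler = new LoginLogoutHandler();

    @Override
    public void run(HttpServletRequest request, HttpServletResponse response)
            throws ActionException {
        handler.handle(false, request);
    }
}
Each Action then knows unambiguously which event it handles, and the shared code stays in one place.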

I also prefer Olaf's solution of using two separate classes. But if you have a hard requirement to use only one class, you can try to recognize the event type from the current call stack:
private void printStackTrace() {
    // Print the current call stack; the frames differ depending on whether
    // the portal invoked the class for the login or the logout event.
    StackTraceElement[] stackTrace = Thread.currentThread().getStackTrace();
    for (StackTraceElement stackTraceElement : stackTrace) {
        System.out.println(stackTraceElement.getClassName() + "." + stackTraceElement.getMethodName());
    }
}
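If you go down that route, a small hypothetical helper could branch on the stack instead of just printing it. The exact class names on the stack depend on your portal version, so print the stack once for a real login and once for a real logout and pick a frame that reliably distinguishes them:
private boolean isLogoutEvent() {
    for (StackTraceElement element : Thread.currentThread().getStackTrace()) {
        // "logout" is only a placeholder marker - replace it with a class or
        // method name you actually see in the logout stack trace.
        if (element.getClassName().toLowerCase().contains("logout")) {
            return true;
        }
    }
    return false;
}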

Related

Is It Possible For Parent And Child Views To Reference Each Other By Injection/Find Before Init Finishes?

I want an embedded view to be able to call a function from the parent view, so I'm trying to have the child reference its parent by injection. This seems to work fine as long as the embedded view is created onDock:
class TestView : View() {
    override val root = vbox {
        label("Parent Label")
    }

    init {
        println("Parent is instantiating.")
    }

    override fun onDock() {
        val child = find(TestView2::class)
        root.add(child)
    }

    fun doThing() {
        println("Parent is doing a thing.")
    }
}

class TestView2 : View() {
    val parentClass: TestView by inject()

    override val root = hbox {
        label("Sub-view label 1")
        label("Sub-view label 2")
    }

    init {
        println("Sub-view is instantiating.")
        parentClass.doThing()
    }
}
I'd like it to be cleaner, though. I'd prefer to be able to use the find function while creating the parent root. That's a problem, as calling the child view within any part of the init process creates a circular instantiation loop. Is there any way to avoid this, or will I just have to settle for onDock and deal with it?
EDIT:
Just to note, I tried the onDock method again in a real, more complicated application and I got a cycle detection error. So even that method is not guaranteed to work.
You can create cyclic dependencies, but you can't call functions in both components' init blocks, as that would be impossible to resolve. The main takeaway here is that you're probably doing something you shouldn't. Views should not communicate with each other directly. This creates tight coupling and prevents reuse. Instead you should communicate with one of the following:
State from a ViewModel
Controller function calls
Events using the EventBus
Since your code example is made up, it's not known what exactly you're trying to achieve in your actual app, but you will find the correct approach in the list above.
I often see the urge to call functions in views and to set data directly into UI components instead of using bindings, and in absolutely every case there is a much better way to solve the problem :)

How to register a Renderer with CRaSH

After reading about the remote shell in the Spring Boot documentation I started playing around with it. I implemented a new Command that produces a Stream of one of my database entities called company.
This works fine. So I want to output my stream of companies in the console. This is done by calling toString() by default. While this seems reasonable, there is also a way to get nicer results by using a Renderer.
Implementing one should be straightforward, as I can delegate most of the work to one of the already existing ones. I use MapRenderer.
class CompanyRenderer extends Renderer<Company> {

    private final mapRenderer = new MapRenderer()

    @Override
    Class<Company> getType() { Company }

    @Override
    LineRenderer renderer(Iterator<Company> stream) {
        def list = []
        stream.forEachRemaining({
            list.add([id: it.id, name: it.name])
        })
        return mapRenderer.renderer(list.iterator())
    }
}
As you can see, I just take some fields from my entity, put them into a Map, and then delegate to an instance of MapRenderer to do the real work.
TL;DR
Only problem is: How do I register my Renderer with CRaSH?
Links
Spring Boot documentation http://docs.spring.io/spring-boot/docs/current/reference/html/production-ready-remote-shell.html
CRaSH documentation (not helping) http://www.crashub.org/1.3/reference.html#_renderers

Is it possible to provide UI selectable options to custom msbuild tasks?

I have built a custom MSBuild task that I use to convert 3D models into the format I use in my engine. However, there are some optional behaviours that I would like to provide, for example allowing the user to choose whether to compute the tangent array or not, whether to reverse the winding order of the indices, etc.
In the actual UI where you select the Build Action for each file, is it possible to define custom fields that would then be fed to the input parameters of the task? Such as a "Compute Tangents" dropdown where you can choose True or False?
If that is possible, how? Are there any alternatives besides defining multiple tasks? I.e. ConvertModelTask, ConvertModelComputeTangentTask, ConvertModelReverseIndicesTask, etc.
Everything in an MSBuild custom task has to be driven by "settable properties".
Option 1.
Define an enum-esque property to drive your behavior.
From memory, MSBuild.ExtensionPack.tasks and the MSBuild.ExtensionPack.Xml.XmlFile task with TaskAction="ReadElementText" do this type of thing.
The "TaskAction" is the enum-esque thing. I say "esque" because all you can do on the outside is set a string, and then in the code you convert the string to an internal enum.
See code here:
http://searchcode.com/codesearch/view/14325280
Option 2: You can still use OO on the tasks. Create an abstract BaseTask for shared logic, then subclass it; each subclass is an MSBuild task that you call.
SvnExport does this: SvnClient is the base class, and it has several subclasses.
See code here:
https://github.com/loresoft/msbuildtasks/blob/master/Source/MSBuild.Community.Tasks/Subversion/SvnExport.cs
You can probably dive deep with EnvDTE or UITypeEditor, but since you already have a custom task, why not keep it simple with a basic WinForm?
using Microsoft.Build.Utilities;

namespace ClassLibrary1
{
    public class Class1 : Task
    {
        public bool ComputeTangents { set { _computeTangents = value; } }
        private bool? _computeTangents;

        public override bool Execute()
        {
            if (!_computeTangents.HasValue)
            {
                // Form1 is a simple WinForms dialog defined elsewhere in the project.
                using (var form1 = new Form1())
                {
                    form1.ShowDialog();
                    _computeTangents = form1.checkBox1.Checked;
                }
            }

            Log.LogMessage("Compute Tangents: {0}", _computeTangents.Value);
            return !Log.HasLoggedErrors;
        }
    }
}

Lazy generic delegate initialisation using Ninject

I'm using Ninject 1.0 and would like to be able to inject lazy initialisation delegates into constructors. So, given the generic delegate definition:
public delegate T LazyGet<T>();
I'd simply like to bind this to IKernel.Get() so that I can pass a lazy getter into constructors, e.g.
public class Foo
{
    readonly LazyGet<Bar> getBar;

    public Foo( LazyGet<Bar> getBar )
    {
        this.getBar = getBar;
    }
}
However, I can't simply call Bind<LazyGet<T>>() because it's an open generic type. I need this to be an open generic so that I don't have to Bind all the different lazy gets to explicit types. In the above example, it should be possible to create a generic delegate dynamically that invokes IKernel.Get<T>().
How can this be achieved with Ninject 1.0?
I don't exactly understand the question, but could you use reflection? Something like:
// the type of T you want to use
Type bindType;
// the kernel you want to use
IKernel k;

// note - not compile tested
// (generic methods are looked up by their plain name, i.e. "Get" rather than "Get`1";
// if Get has several overloads you may need to filter for the generic, parameterless one)
MethodInfo openGet = typeof(IKernel).GetMethod("Get", Type.EmptyTypes);
MethodInfo constGet = openGet.MakeGenericMethod(bindType);
Type delegateType = typeof(LazyGet<>).MakeGenericType(bindType);
Delegate lazyGet = Delegate.CreateDelegate(delegateType, k, constGet);
Would using lazyGet allow you to do what you want? Note that you may have to call the Foo class by reflection as well, if bindType isn't known in the compile context.
I am fairly certain that the only way to do this (without some dirty reflection code) is to bind your delegate with type params. This will mean it needs to be done for each individual type you use. You could possibly use a BindingGenerator to do this in bulk, but it could get a bit ugly.
If there is a better solution (a clean one) I would love to hear it as I run into this problem from time to time.
From another similar question I answered:
public class Module : NinjectModule
{
    public override void Load()
    {
        Bind(typeof(Lazy<>)).ToMethod(ctx =>
            GetType()
                .GetMethod("GetLazyProvider", BindingFlags.Instance | BindingFlags.NonPublic)
                .MakeGenericMethod(ctx.GenericArguments[0])
                .Invoke(this, new object[] { ctx.Kernel }));
    }

    protected Lazy<T> GetLazyProvider<T>(IKernel kernel)
    {
        return new Lazy<T>(() => kernel.Get<T>());
    }
}

Enterprise Library Validation Block - Should validation be placed on class or interface?

I am not sure where the best place to put validation (using the Enterprise Library Validation Block) is? Should it be on the class or on the interface?
Things that may affect it:
Validation rules would not be changed in classes which inherit from the interface.
Validation rules would not be changed in classes which inherit from the class.
Inheritance will occur from the class in most cases - I suspect some fringe cases will inherit from the interface (but I would try to avoid that).
The interface's main use is for DI, which will be done with the Unity block.
Given the way you are trying to use the Validation Block with DI, I don't think it's a problem if you set the attributes at the interface level. I also don't think it should create problems in the inheritance chain. However, I have mostly seen this block used at the class level, with the intent of keeping interfaces from over-specifying things. IMO I don't see a big threat in doing this.
Be very careful here, your test is too simple.
This will not work as you expect for SelfValidation Validators or Class Validators, only for the simple property validators like you have there.
Also, if you are using the PropertyProxyValidator in an ASP.NET page, I don't believe it will work either, because it only looks at field validators, not inherited/implemented validators...
Yes, big holes in the VAB if you ask me.
For the sake of completeness I decided to write a small test to make sure it would work as expected, and it does. I'm just posting it here in case anyone else wants it in the future.
using System;
using Microsoft.Practices.EnterpriseLibrary.Validation;
using Microsoft.Practices.EnterpriseLibrary.Validation.Validators;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            ISpike spike = new Spike();
            spike.Name = "A really long name that will fail.";

            ValidationResults r = Validation.Validate<ISpike>(spike);
            if (!r.IsValid)
            {
                throw new InvalidOperationException("Validation error found.");
            }
        }
    }

    public class Spike : ConsoleApplication1.ISpike
    {
        public string Name { get; set; }
    }

    interface ISpike
    {
        [StringLengthValidator(2, 5)]
        string Name { get; set; }
    }
}
What version of Enterprise Library are you using for your code example? I tried it using Enterprise Library 5.0, but it didn't work.
I tracked it down to the following section of code within the EL 5.0 source code:
namespace Microsoft.Practices.EnterpriseLibrary.Validation
{
    public static class Validation
    {
        public static ValidationResults Validate<T>(T target, ValidationSpecificationSource source)
        {
            Type targetType = target != null ? target.GetType() : typeof(T);
            Validator validator = ValidationFactory.CreateValidator(targetType, source);
            return validator.Validate(target);
        }
    }
}
If the target object is defined, then target.GetType() will return the most specific class definition, NOT the interface definition.
My workaround is to replace your line:
ValidationResults r = Validation.Validate<ISpike>(spike);
With:
ValidationResults r = ValidationFactory.CreateValidator<ISpike>().Validate(spike);
This got it working for me.