Elsa Workflow best practice for a short-lived workflow - elsa-workflows

I have a short-lived Elsa workflow.
It gets passed a list of tasks, and the workflow should rearrange the tasks in some given ways.
For this, some actions are grouped in a workflow.
Currently, I'm working with the WorkflowExecutionContext.Input.
For example, the GoToFirstTask action looks like this:
if (context.WorkflowExecutionContext.Input is Worksheet wks)
{
    wks.CurrentTask = wks.TaskPool.First();
}
Is this the intended way? Or should I think about context providers?
Can someone point me to an example?

Your approach seems fine to me. Although you might indeed consider implementing a context provider, I doubt it would add much benefit.
Here's what it would look like to implement a context provider:
public class WorksheetContextProvider : ILoadWorkflowContext
{
    public IEnumerable<Type> SupportedTypes => new[] { typeof(Worksheet) };

    public ValueTask<object?> LoadAsync(LoadWorkflowContext context, CancellationToken cancellationToken = default)
    {
        return ValueTask.FromResult<object?>(context.WorkflowExecutionContext.Input as Worksheet);
    }
}
Now, if your workflow definition's Workflow Context Type is configured with the assembly-qualified type name of Worksheet, the workflow runner will invoke your custom context provider before executing the workflow.
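To wire this up, you register the provider at startup. Something like the following should do it, though the exact registration extension may vary between Elsa versions, so treat this as a sketch:

services.AddElsa();
// Assumed registration call; check your Elsa version's API for the exact method name.
services.AddWorkflowContextProvider<WorksheetContextProvider>();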
Your activities would now be able to do this:
if (context.WorkflowExecutionContext.WorkflowContext is Worksheet wks)
{
    wks.CurrentTask = wks.TaskPool.First();
}
As you can see, it's almost identical, and it even has a slight disadvantage: the workflow must now be configured with the context type (MyAssembly.Domain.Entities.Worksheet, MyAssembly), which doesn't help the user.
Your approach is good because it allows for simple usage from the user's point of view.

Related

Seemingly redundant event and event handlers

I will explain with an example. My GWT project has a Company module, which lets a user add, edit, delete, select and list companies.
Of these, the add, edit and delete operations land the user back on the CompanyList page.
Thus, having three different events - CompanyAddedEvent, CompanyUpdatedEvent and CompanyDeletedEvent, and their respective event handlers - seems like overkill to me, as there is absolutely no difference in their function.
Is it OK to let a single event manage the three operations?
One alternative I can think of is to use some event like CompanyListInvokedEvent. However, that doesn't seem quite appropriate to me, as the event is not actually the list being invoked, but a company being added/updated/deleted.
If it had been only a single module, I would have gotten the task done with three separate events. But ten other such modules face the same dilemma. That means 10 x 3 = 30 event classes along with their 30 respective handlers. The number is large enough for me to reconsider.
What would be a good solution to this?
UPDATE -
@ColinAlworth's answer made me realize that I could easily use generics instead of my stupid solution. The following code represents an event, EntityUpdatedEvent, which would be raised whenever an entity is updated.
Event class -
public class EntityUpdatedEvent<T> extends GwtEvent<EntityUpdatedEventHandler<T>> {

    private Type<EntityUpdatedEventHandler<T>> type;
    private final String statusMessage;

    public EntityUpdatedEvent(Type<EntityUpdatedEventHandler<T>> type, String statusMessage) {
        this.statusMessage = statusMessage;
        this.type = type;
    }

    public String getStatusMessage() {
        return this.statusMessage;
    }

    @Override
    public com.google.gwt.event.shared.GwtEvent.Type<EntityUpdatedEventHandler<T>> getAssociatedType() {
        return this.type;
    }

    @Override
    protected void dispatch(EntityUpdatedEventHandler<T> handler) {
        handler.onEventRaised(this);
    }
}
Event handler interface -
public interface EntityUpdatedEventHandler<T> extends EventHandler {
    void onEventRaised(EntityUpdatedEvent<T> event);
}
Adding the handler to event bus -
eventBus.addHandler(CompanyEventHandlerTypes.CompanyUpdated, new EntityUpdatedEventHandler<Company>() {
    @Override
    public void onEventRaised(EntityUpdatedEvent<Company> event) {
        History.newItem(CompanyToken.CompanyList.name());
        Presenter presenter = new CompanyListPresenter(serviceBundle, eventBus,
                new CompanyListView(), event.getStatusMessage());
        presenter.go(container);
    }
});
Likewise, I have two other generic events, Added and Deleted, thus eliminating the redundancy from my event-related codebase.
Are there any suggestions on this solution?
P.S.: This discussion provides more insight into this problem.
To answer this question, let me first pose another way of thinking about this same kind of problem - instead of events, we'll just use methods.
In my tiered application, two modules communicate via an interface (notice that these methods are all void, so they are rather like events - the caller doesn't expect an answer back):
package com.acme.project;

public interface CompanyServiceInterface {
    public void addCompany(CompanyDto company) throws AcmeBusinessLogicException;
    public void updateCompany(CompanyDto company) throws AcmeBusinessLogicException;
    public void deleteCompany(CompanyDto company) throws AcmeBusinessLogicException;
}
This seems like overkill to me - why not just reduce the size of this API to one method and add an enum argument? That way, when I build an alternative implementation or need to mock this in my unit tests, I have just one method to build instead of three. It becomes clearly overkill when I build the rest of my application - why not just ObjectServiceInterface.modify(Object someDto, OperationEnum invocation); to work for all 10 modules?
One answer is that you might want to drastically modify the implementation of one but not the others - now that you've reduced this to just one method, all of that belongs inside a switch case. Another is that once things are simplified in this way, the inclination is often to simplify further - perhaps to combine create and update into just one method. Once this is done, all call sites must make sure to fulfill all possible details of that method's contract instead of just one specific one.
If the receivers of those events are simple and will remain so, there may be no good reason not to just have a single ModelModifiedEvent that is clearly generic enough for all possible use cases - perhaps just wrapping the ID to request that all client modules refresh their view of that object. If a future use case arises where only one kind of event is important, the event must change, as must all sites that create the event, so that they properly populate this new field.
Java shops typically don't use Java because it is the prettiest language, or because it is the easiest language to write or find developers for, but because it is relatively easy to maintain and refactor. When designing an API, it is important to consider future needs, but also to think about what it will take to modify the current API - your IDE almost certainly has a shortcut key to find all invocations of a particular method or constructor, allowing you to easily find all places where it is used and update them. So consider what other use cases you expect, and how easily the rest of the codebase can be updated.
Finally, don't forget about generics - for my example above, I would probably make a DtoServiceInterface to simplify matters, so that I just declare the one interface with three methods, and implement it and refer to it as needed. In the same way, you can make one set of three GwtEvent types (with *Handler interfaces and possibly Has*Handlers as well), but keep them generic for all possible types. Consider com.google.gwt.event.logical.shared.SelectionEvent<T> as an example here - in your case you would probably want to make the model object type a parameter so that handlers can check which type of event they are dealing with (remember that generics are erased in Java), or source from one EventBus for each model type.

How can I modify the fixture my custom theory data attribute creates for AutoFixture?

I'm really appreciating the power of AutoFixture coupled with XUnit's theories. I've recently adopted the use of encapsulating customizations and providing them to my tests via an attribute.
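The attribute itself is essentially just an AutoDataAttribute preconfigured with my customizations - something along these lines (sketched here, since the original snippet isn't shown):

public class AutoDomainDataAttribute : AutoDataAttribute
{
    // Sketch: wires the fixture to the DomainCustomization described below.
    public AutoDomainDataAttribute()
        : base(new Fixture().Customize(new DomainCustomization()))
    {
    }
}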
On certain occasions, I need a one-off scenario to run my test with. When I use an AutoDomainDataAttribute, like above, can I ask for an IFixture and expect to get the same instance created by the attribute?
In my scenario, I'm using MultipleCustomization by default for collections, etc. However, in this one case, I want only a single item sent to my SUT's constructor. So, I've defined my test method like so:
[Theory, AutoDomainData]
public void SomeTest(IFixture fixture)
{
    fixture.RepeatCount = 1;
    var sut = fixture.CreateAnonymous<Product>();
    // ...
}
Unfortunately, I'm getting an exception when creating the anonymous Product. Other tests work just fine if I ask for a Product as a method parameter with those attributes. It's only an issue in this particular case, where I'm hoping that the fixture parameter is the same one created by my AutoDomainDataAttribute.
Product's constructor expects an IEnumerable that normally gets populated with 3 items, due to the customizations I have in place via AutoDomainData. Currently, my DomainCustomization is a CompositeCustomization made up of MultipleCustomization and AutoMoqCustomization, in that order.
The exception is: "InvalidCastException: Unable to cast object of type 'Castle.Proxies.ObjectProxy' to type 'Product'."
If you need the same Fixture instance as the one active in the attribute, you can inject the Fixture into itself in a Customization, like this:
public class InjectFixtureIntoItself : ICustomization
{
    public void Customize(IFixture fixture)
    {
        fixture.Inject(fixture);
    }
}
Just remember to add this to your CompositeCustomization before AutoMoqCustomization, since IFixture is an interface, and if AutoMoqCustomization comes first, you'll get a Mock instance instead - AFAICT, that's what's currently happening with the dynamic Castle proxy.
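In other words, the composite might look like this (a sketch, using the DomainCustomization name from your question):

public class DomainCustomization : CompositeCustomization
{
    public DomainCustomization()
        : base(
            new InjectFixtureIntoItself(), // must come before AutoMoqCustomization
            new MultipleCustomization(),
            new AutoMoqCustomization())
    {
    }
}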
However, if you really need a Fixture instance, why not just write a regular, imperative test method:
[Fact]
public void SomeTest()
{
    var fixture = new Fixture().Customize(new DomainCustomization());
    fixture.RepeatCount = 1;
    var sut = fixture.CreateAnonymous<Product>();
    // ...
}
That seems to me to be much easier... I occasionally do this myself too...
Still, I wonder if you couldn't phrase your API or test case in a different way to make the whole issue go away. I very rarely find that I have to manipulate the RepeatCount property these days, so I wonder why you would want to do that?
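For example, something along these lines avoids RepeatCount altogether by registering the exact one-item sequence the SUT should receive (a sketch; "Item" is a placeholder for whatever element type Product's constructor takes):

// Sketch: inject the one-off sequence directly instead of changing RepeatCount.
fixture.Inject<IEnumerable<Item>>(new[] { fixture.CreateAnonymous<Item>() });
var sut = fixture.CreateAnonymous<Product>();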
That's probably the subject of a separate Stack Overflow question, though...

Command Pattern in .NET MVC 3 (removing junk from the controller)

I am trying to implement this Command Pattern on my .NET MVC 3 application, specifically for saving edits to a Thing. I am undecided on how to proceed. Before I get to the actual question, here is the simplified code:
public class ThingController : Controller
{
    private readonly ICommandHandler<EditThingCommand> handler;

    public ThingController(ICommandHandler<EditThingCommand> handler)
    {
        this.handler = handler;
    }

    public ActionResult EditThing(int id)
    {
        // ...build EditThingViewModel and return it with a View...
    }

    [HttpPost]
    public ActionResult EditThing(int id, EditThingViewModel vm)
    {
        var command = new EditThingCommand
        {
            // ...not sure yet...
        };
        this.handler.Handle(command);
        // ...redirect somewhere...
    }
}
My EditThingViewModel is wholly disconnected from my domain, which consists of POCO classes. It seems like my EditThingCommand should look like this:
public class EditThingCommand
{
    public Thing ModifiedThing;
}
However, building ModifiedThing would then still happen in my controller. That's the majority of the work in this case. By the time ModifiedThing is built (and the "old" timestamp applied to it for optimistic concurrency checking), all that's left is for the command handler to call Update on my data context.
Clearly there is value in being able to easily decorate it with other commands, but I'd also like to be able to move the construction of ModifiedThing outside of my controller. (Perhaps this question is really just about that.) EditThingCommand is in my domain and doesn't have a reference to EditThingViewModel, so the mapping can't go there. Does it make sense to have another command in my presentation layer for mapping my view model to my POCO entity?
I created an EditThingPostCommand outside of my domain, which takes the EditThingViewModel as a parameter. The EditThingPostCommandHandler is responsible for creating the EditThingCommand and calling its handler.
It works, but I'm not going to assume that's the best answer to my question. Arguably most of what the EditThingPostCommandHandler is doing could be done in a custom AutoMapper configuration, which would still serve the purpose of cleaning up the controller action method.
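In sketch form, the arrangement looks like this (the mapping body is illustrative):

public class EditThingPostCommand
{
    public int Id;
    public EditThingViewModel ViewModel;
}

public class EditThingPostCommandHandler : ICommandHandler<EditThingPostCommand>
{
    private readonly ICommandHandler<EditThingCommand> inner;

    public EditThingPostCommandHandler(ICommandHandler<EditThingCommand> inner)
    {
        this.inner = inner;
    }

    public void Handle(EditThingPostCommand command)
    {
        // Map the view model onto the domain POCO; a custom AutoMapper
        // configuration could replace this hand-written mapping.
        var modifiedThing = new Thing
        {
            Id = command.Id
            // ...copy the remaining fields from command.ViewModel...
        };
        inner.Handle(new EditThingCommand { ModifiedThing = modifiedThing });
    }
}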
After several months of using this pattern on other projects, it is apparent to me that the commands on this particular project were simply too general and therefore too complex, requiring too much setup. It would have been better to create, for example, an EditThingTitleCommand and a MoveThingPiecesCommand and so on, and call them from their own ActionMethods.
In other words, when using the command pattern, don't just use the commands as replacements for typical CRUD operations. With more specificity comes more benefit.
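For example, a narrower command and its dedicated action might look like this (a sketch; "titleHandler" would be injected like the handler shown earlier):

public class EditThingTitleCommand
{
    public int ThingId;
    public string NewTitle;
}

[HttpPost]
public ActionResult EditTitle(int id, string newTitle)
{
    // One small, specific command per operation instead of one general edit.
    this.titleHandler.Handle(new EditThingTitleCommand { ThingId = id, NewTitle = newTitle });
    return RedirectToAction("EditThing", new { id });
}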

What are the possible problems with unit testing ASP.NET MVC code in the following way?

I've been looking at the way unit testing is done in the NuGetGallery. I observed that when controllers are tested, service classes are mocked. This makes sense to me because, while testing the controller logic, I didn't want to be worried about the architectural layers below. After using this approach for a while, I noticed how often I was running around fixing my mocks all over my controller tests whenever my service classes changed. To solve this problem, without consulting people smarter than me, I started writing tests like this (don't worry, I haven't gotten that far):
public class PersonController : Controller
{
    private readonly LESRepository _repository;

    public PersonController(LESRepository repository)
    {
        _repository = repository;
    }

    public ActionResult Index(int id)
    {
        var model = _repository.GetAll<Person>()
                               .FirstOrDefault(x => x.Id == id);
        var viewModel = new VMPerson(model);
        return View(viewModel);
    }
}
public class PersonControllerTests
{
    public void can_get_person()
    {
        var person = _helper.CreatePerson(username: "John");
        var controller = new PersonController(_repository);
        controller.FakeOutContext();
        var result = (ViewResult)controller.Index(person.Id);
        var model = (VMPerson)result.Model;
        Assert.IsTrue(model.Person.Username == "John");
    }
}
I guess this would be integration testing, because I am using a real database (I'd prefer an in-memory one). I begin my test by putting data in my database (each test runs in a transaction and is rolled back when the test completes). Then I call my controller, and I really don't care how it retrieves the data from the database (via a repository or service class), just that the Model to be sent to the view must have the record I put into the database, aka my assertion. The cool thing about this approach is that a lot of times I can continue to add more layers of complexity without having to change my controller tests:
public class PersonController : Controller
{
    private readonly LESRepository _repository;
    private readonly PersonService _personService;

    public PersonController(LESRepository repository)
    {
        _repository = repository;
        _personService = new PersonService(_repository);
    }

    public ActionResult Index(int id)
    {
        var model = _personService.GetActivePerson(id);
        if (model == null)
            return PersonNotFoundResult();
        var viewModel = new VMPerson(model);
        return View(viewModel);
    }
}
Now I realize I didn't create an interface for my PersonService and pass it into the constructor of my controller. The reason is 1) I don't plan to mock my PersonService, and 2) I didn't feel I needed to inject my dependency, since my PersonController for now only needs to depend on one type of PersonService.
I'm new at unit testing and I'm always happy to be shown that I'm wrong. Please point out why the way I'm testing my controllers could be a really bad idea (besides the obvious increase in the time my tests will take to run).
Hmm, a few things here, mate.
First, it looks like you're trying to test a controller method. Great :)
So this means that anything the controller needs should be mocked. This is because:
You don't want to worry about what happens inside that dependency.
You can verify that the dependency was called/executed.
Ok, so let's look at what you did and I'll see if I can refactor it to make it a bit more testable.
REMEMBER: I'm testing the CONTROLLER METHOD, not the stuff the controller method calls/depends upon.
So this means I don't care about the service instance or the repository instance (whichever architectural way you decide to follow).
NOTE: I've kept things simple, so I've stripped lots of crap out, etc.
Interface
First, we need an interface for the repository. This can be implemented as an in-memory repo, an Entity Framework repo, etc. You'll see why soon.
public interface ILESRepository
{
    IQueryable<T> GetAll<T>() where T : class;
}
Controller
Here, we use the interface. This means it's really easy (and awesome) to use either a mock ILESRepository or a real instance.
public class PersonController : Controller
{
    private readonly ILESRepository _repository;

    public PersonController(ILESRepository repository)
    {
        if (repository == null)
        {
            throw new ArgumentNullException("repository");
        }
        _repository = repository;
    }

    public ActionResult Index(int id)
    {
        var model = _repository.GetAll<Person>()
                               .FirstOrDefault(x => x.Id == id);
        var viewModel = new VMPerson(model);
        return View(viewModel);
    }
}
Unit Test
Ok - here's the magic money shot stuff.
First, we create some fake people. Just work with me here... I'll show you where we use this in a tick. It's just a boring, simple list of your POCOs.
public static class FakePeople
{
    public static IList<Person> GetSomeFakePeople()
    {
        return new List<Person>
        {
            new Person { Id = 1, Username = "John" },
            new Person { Id = 2, Username = "Fred" },
            new Person { Id = 3, Username = "Sally" }
        };
    }
}
Now we have the test itself. I'm using xUnit as my testing framework and Moq for my mocking. Any framework is fine here.
public class PersonControllerTests
{
    [Fact]
    public void GivenAListOfPeople_Index_Returns1Person()
    {
        // Arrange.
        var mockRepository = new Mock<ILESRepository>();
        mockRepository.Setup(x => x.GetAll<Person>())
                      .Returns(FakePeople.GetSomeFakePeople().AsQueryable());
        var controller = new PersonController(mockRepository.Object);
        controller.FakeOutContext();

        // Act.
        var result = controller.Index(1) as ViewResult;

        // Assert.
        Assert.NotNull(result);
        var model = result.Model as VMPerson;
        Assert.NotNull(model);
        Assert.Equal(1, model.Person.Id);
        Assert.Equal("John", model.Person.Username);

        // Make sure we actually called the GetAll<Person>() method on our mock.
        mockRepository.Verify(x => x.GetAll<Person>(), Times.Once());
    }
}
Ok, let's look at what I did.
First, I arrange my crap. I create a mock of the ILESRepository.
Then I say: if anyone ever calls the GetAll<Person>() method, well... don't -really- hit a database or a file or whatever... just return a list of people, which we created in FakePeople.GetSomeFakePeople().
So this is what would happen in the controller ...
var model = _repository.GetAll<Person>()
                       .FirstOrDefault(x => x.Id == id);
First, we ask our mock to hit the GetAll<Person>() method. I just 'set it up' to return a list of people... so then we have a list of 3 Person objects. Next, we call FirstOrDefault(...) on this list of 3 Person objects, which returns a single object or null, depending on the value of id.
Tada! That's the money shot :)
Now back to the rest of the unit test.
We Act and then we Assert. Nothing hard there.
For bonus points, I verify that we actually called the GetAll<Person>() method on the mock, inside the Controller's Index method. This is a safety check to make sure the controller logic we're testing was done right.
Sometimes you might want to check for bad scenarios, like a person passing in bad data. This means you might never get to the mocked methods (which is correct), so you verify that they were never called.
Ok - questions, class?
Even when you do not plan to mock an interface, I strongly suggest you do not hide the real dependencies of an object by creating them inside the constructor; you are breaking the Single Responsibility Principle and writing untestable code.
The most important thing to consider when writing tests is: "There is no magic key to writing tests". There are a lot of tools out there to help you write tests, but the real effort should be put into writing testable code rather than trying to hack existing code to write a test, which usually ends up being an integration test instead of a unit test.
Creating a new object inside a constructor is one of the first big signals that your code is not testable.
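To illustrate with the controller from the question, compare the hidden dependency with an injected one (IPersonService is an assumed abstraction here):

// Hard to test: the service is created inside the constructor (hidden dependency).
public class PersonController : Controller
{
    private readonly PersonService _personService;

    public PersonController(LESRepository repository)
    {
        _personService = new PersonService(repository); // cannot be substituted in a test
    }
}

// Testable: the dependency is abstracted and injected.
public class PersonController : Controller
{
    private readonly IPersonService _personService;

    public PersonController(IPersonService personService)
    {
        _personService = personService; // a mock or fake can be passed in tests
    }
}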
These links helped me a lot when I was making the transition to writing tests, and let me tell you: after you start, it becomes a natural part of your daily work and you will love the benefits. I cannot picture myself writing code without tests anymore.
Clean code guide (used at Google): http://misko.hevery.com/code-reviewers-guide/
To get more information, read the following:
http://misko.hevery.com/2008/09/30/to-new-or-not-to-new/
and watch this video cast from Misko Hevery
http://www.youtube.com/watch?v=wEhu57pih5w&feature=player_embedded
Edited:
This article from Martin Fowler explains the difference between the Classical and Mockist TDD approaches:
http://martinfowler.com/articles/mocksArentStubs.html
As a summary:
Classic TDD approach: This means testing everything you can without creating substitutes or doubles (mocks, stubs, dummies), with the exception of external services like web services or databases. Classical testers use doubles for the external services only.
Benefits: When you test, you are actually testing the wiring logic of your application as well as the logic itself (not in isolation).
Cons: If an error occurs, you will see potentially hundreds of tests failing and it will be hard to find the code responsible.
Mockist TDD approach: People following the Mockist approach test all the code in isolation, because they create doubles for every dependency.
Benefits: You are testing each part of your application in isolation. If an error occurs, you know exactly where it occurred, because just a few tests will fail, ideally only one.
Cons: Well, you have to double all your dependencies, which makes tests a little bit harder, though you can use tools like AutoFixture to create doubles for the dependencies automatically.
This is another great article about writing testable code
http://www.loosecouplings.com/2011/01/how-to-write-testable-code-overview.html
There are some downsides.
First, when you have a test that depends on an external component (like a live database), that test is no longer really predictable. It can fail for any number of reasons - a network outage, a changed password on the database account, missing some DLLs, etc. So when your test suddenly fails, you cannot be immediately sure where the flaw is. Is it a database problem? Some tricky bug in your class?
When you can immediately answer that question just by knowing which test failed, you have the enviable quality of defect localization.
Secondly, if there is a database problem, all your tests that depend on it will fail at once. This might not be so severe, since you can probably realize what the cause is, but I guarantee it will slow you down to examine each one. Widespread failures can mask real problems, because you don't want to look at the exception on each of 50 tests.
And I know you want to hear about factors besides the execution time, but that really does matter. You want to run your tests as frequently as possible, and a longer runtime discourages that.
I have two projects: one with 600+ tests that run in 10 seconds, and one with 40+ tests that run in 50 seconds (the latter does actually talk to a database, on purpose). I run the faster test suite much more frequently while developing. Guess which one I find easier to work with?
All of that said, there is value in testing external components. Just not when you're unit-testing. Integration tests are more brittle, and slower. That makes them more expensive.
Accessing the database in unit tests has the following consequences:
Performance. Populating the database and accessing it is slow. The more tests you have, the longer the wait. If you use mocking, your controller tests may run in a couple of milliseconds each, compared to seconds if they use the database directly.
Complexity. For shared databases, you'll have to deal with concurrency issues where multiple agents are running tests against the same database. The database needs to be provisioned, structure needs to be created, data populated etc. It becomes rather complex.
Coverage. You might find that some conditions are nearly impossible to test without mocking. Examples may include verifying what to do when the database times out, or what to do if sending an email fails.
Maintenance. Changes to your database schema, especially if they are frequent, will impact almost every test that uses the database. In the beginning, when you have 10 tests, it may not seem like much, but consider when you have 2000 tests. You may also find that changing business rules and adapting the tests becomes more complex, as you'll have to modify the data populated in the database to verify each business rule.
You have to ask whether it is worth it for testing business rules. In most cases, the answer may be "no".
The approach I follow is:
Unit test classes (controllers, service layers, etc.) by mocking out dependencies and simulating conditions that may occur (like database errors). These tests verify business logic, and the aim is to gain as much coverage of decision paths as possible. Use a tool like Pex to highlight any issues you never thought of. You'll be surprised how much more robust (and resilient) your application will be after fixing some of the issues Pex highlights.
Write database tests to verify that the ORM I'm using works with the underlying database. You'll be surprised at the issues EF and other ORMs have with certain database engines (and versions). These tests are also useful for tuning performance and reducing the number of queries and the amount of data sent to and from the database.
Write coded UI tests that automate the browser and verify the system actually works. In this case, I would populate the database beforehand with some data. These tests simply automate the tests I would otherwise have done manually. The aim is to verify that critical pieces still work.

Loose programming in high-level languages: how, why and how much?

I'm writing my code in Haxe. This is quite irrelevant to the question though, as long as you keep in mind that it's a high-level language comparable with Java, ActionScript, JavaScript, C#, etc. (I'm using pseudocode here).
I'm going to work on a big project and am busy preparing now. For this question I'll create a small scenario though: a simple application which has a Main class (this one is executed when the application launches) and a LoginScreen class (this is basically a class that loads a login screen so that the user can log in).
Typically I guess this would look like the following:
Main constructor:
    loginScreen = new LoginScreen()
    loginScreen.load();

LoginScreen load():
    niceBackground = loader.loadBitmap("somebg.png");
    someButton = new gui.customButton();
    someButton.onClick = buttonIsPressed;

LoginScreen buttonIsPressed():
    socketConnection = new network.SocketConnection();
    socketConnection.connect(host, ip);
    socketConnection.write("login#auth#username#password");
    socketConnection.onData = gotAuthConfirmation;

LoginScreen gotAuthConfirmation(response):
    if response == "success" {
        //login success.. continue
    }
This simple scenario adds the following dependencies and downsides to our classes:
Main will not load without LoginScreen
LoginScreen will not load without the custom loader class
LoginScreen will not load without our custom button class
LoginScreen will not load without our custom SocketConnection class
SocketConnection (which will have to be accessed by a lot of different classes in the future) has now been set up inside LoginScreen, which is actually quite unrelated to it, apart from the fact that the LoginScreen requires a socket connection for the first time
To solve these problems, I have been advised to do "Event-Driven Programming", or loose coupling. As far as I understand, this basically means making classes independent from each other and then binding them together in separate binders.
So question 1: is my view on it true or false? Does one have to use binders?
I heard Aspect-Oriented Programming could help here. Unfortunately, Haxe does not support this.
However, I do have access to an event library which basically allows me to create a signaller (public var loginPressedSignaller = new Signaller()), to fire a signaller (loginPressedSignaller.fire()) and to listen to a signaller (someClass.loginPressedSignaller.bind(doSomethingWhenLoginPressed)).
So, with a little further investigation, I figured this would change my previous setup to:
Main:
    public var appLaunchedSignaller = new Signaller();

Main constructor:
    appLaunchedSignaller.fire();

LoginScreen:
    public var loginPressedSignaller = new Signaller();

LoginScreen load():
    niceBackground = !!! Question 2: how do we use Event-Driven Programming to load our background here, while not being dependent on the custom loader class !!!
    someButton = !!! same as for niceBackground, but for the customButton class !!!
    someButton.onClick = buttonIsPressed;

LoginScreen buttonIsPressed():
    loginPressedSignaller.fire(username, pass);

LoginScreenAuthenticator:
    public var loginSuccessSignaller = new Signaller();
    public var loginFailSignaller = new Signaller();

LoginScreenAuthenticator auth(username, pass):
    socketConnection = !!! how do we use a socket connection here, if we cannot call a custom socket connection class !!!
    socketConnection.write("login#auth#username#password");
This code is not finished yet - e.g. I still have to listen for the server response - but you probably understand where I am getting stuck.
Question 2: Does this new structure make any sense? How should I solve the problems mentioned in the !!! delimiters above?
Then I heard about binders. So maybe I need to create a binder for each class, to connect everything together. Something like this:
MainBinder:
    feature = new Main();

LoginScreenBinder:
    feature = new LoginScreen();
    MainBinder.feature.appLaunchedSignaller.bind(feature.load);
    niceBackgroundLoader = loader.loadBitmap;
    someButtonClass = gui.customButton();
etc... hopefully you understand what I mean. This post is getting a bit long so I have to wrap it up a bit.
Question 3: Does this make any sense? Doesn't this make things unnecessarily complex?
Also, in the above "Binders" I only had to use classes which are instantiated once, e.g. a login screen. What if there are multiple instances of a class, e.g. a Player class in a game of chess?
Well, concerning the how, I would point out my 5 commandments to you. :)
For this question only 3 are really important:
single responsibility (SRP)
interface segregation (ISP)
dependency inversion (DIP)
Starting off with SRP, you have to ask yourself the question: "What is the responsibility of class X?".
The login screen is responsible for presenting an interface to the user to fill in and submit their login data. Thus:
it makes sense for it to depend on the button class, because it needs the button.
it makes no sense for it to do all the networking etc.
First of all, let's abstract the login service:
interface ILoginService {
    //Rather than using signallers and what-not, I'll just rely on haXe's support for functional style,
    //which renders these cumbersome idioms from more classic languages quite obsolete.
    function login(user:String, pwd:String, onDone:LoginResult->Void):Void;
}

enum Result<T> {
    //a generic enum to return results from basically any kind of action that may fail
    Fail(error:Int, reason:String);
    Success(user:T);
}

typedef LoginResult = Result<IUser>; //IUser basically represents an authenticated user
From the point of view of the Main class, the login screen looks like this:
interface ILoginInterface {
    function show(inputHandler:String->String->Void):Void;
    function hide():Void;
    function error(reason:String):Void;
}
performing login:
var server:ILoginService = ...; //wherever it comes from. I will say a word about that later
var login:ILoginInterface = ...; //same thing as with the service

login.show(function (user, pwd):Void {
    server.login(user, pwd, function (result) {
        switch (result) {
            case Fail(_, reason):
                login.error(reason);
            case Success(user):
                login.hide();
                //proceed with the resulting user
        }
    });
}); //for the sake of conciseness I used an anonymous function, but usually you'd put a method here of course
Now ILoginService looks a little titchy. But to be honest, it does all it needs to do. It can effectively be implemented by a class Server that encapsulates all networking in a single class, having a method for each of the N calls your actual server provides. But first of all, ISP suggests that many client-specific interfaces are better than one general-purpose interface; for the same reason, ILoginInterface is really kept to its bare minimum.
No matter how these two are actually implemented, you will not need to change Main (unless of course the interface changes). This is DIP being applied: Main doesn't depend on the concrete implementations, only on a very concise abstraction.
Now let's have some implementations:
class LoginScreen implements ILoginInterface {
    public function show(inputHandler:String->String->Void):Void {
        //render the UI on the screen
        //wait for the button to be clicked
        //when done, call inputHandler with the input values from the respective fields
    }

    public function hide():Void {
        //hide UI
    }

    public function error(reason:String):Void {
        //display error message
    }

    public static function getInstance():LoginScreen {
        //classical singleton instantiation
    }
}
class Server implements ILoginService {
    function new(host:String, port:Int) {
        //init connection here for example
    }

    public static function getInstance():Server {
        //classical singleton instantiation
    }

    public function login(user:String, pwd:String, onDone:LoginResult->Void) {
        //issue login over the connection
        //invoke the handler with the retrieved result
    }

    //... possibly other methods here, that are used by other classes
}
Ok, that was pretty straightforward, I suppose. But just for the fun of it, let's do something really idiotic:
class MailLogin implements ILoginInterface {
    public function new(mail:String) {
        //save address
    }

    public function show(inputHandler:String->String->Void):Void {
        //print some sort of "waiting for authentication"-notification on screen
        //send an email to the given address: "please respond with username:password"
        //keep polling your mail server for a response, parse it and invoke the input handler
    }

    public function hide():Void {
        //remove the "waiting for authentication"-notification
        //send an email to the given address: "login successful"
    }

    public function error(reason:String):Void {
        //send an email to the given address: "login failed. reason: [reason] please retry."
    }
}
As pedestrian as this authentication may be, from the point of view of the Main class this doesn't change anything, and thus it will work just as well.
A more likely scenario is actually that your login service is on another server (possibly an HTTP server) that performs the authentication and, in case of success, creates a session on the actual app server. Design-wise, this could be reflected in two separate classes.
Now, let's talk about the "..." I left in Main. Well, I'm lazy, so I can tell you that in my code you are likely to see:
var server:ILoginService = Server.getInstance();
var login:ILoginInterface = LoginScreen.getInstance();
Of course, this is far from being the clean way to do it. The truth is, it's the easiest way to go, and the dependency is limited to one occurrence, which can later be removed through dependency injection.
Just as a simple example of an IoC container in haXe:
class Injector {
    static var providers = new Hash<Void->Dynamic>();

    public static function setProvider<T>(type:Class<T>, provider:Void->T):Void {
        var name = Type.getClassName(type);
        if (providers.exists(name))
            throw "duplicate provider for " + name;
        else
            providers.set(name, provider);
    }

    public static function get<T>(type:Class<T>):T {
        var name = Type.getClassName(type);
        return
            if (providers.exists(name))
                providers.get(name)(); //call the provider to obtain an instance
            else
                throw "no provider for " + name;
    }
}
elegant usage (with the using keyword):
using Injector;

//wherever you would like to wire it up:
ILoginService.setProvider(Server.getInstance);
ILoginInterface.setProvider(LoginScreen.getInstance);

//and in Main:
var server = ILoginService.get();
var login = ILoginInterface.get();
This way, you practically have no coupling between the individual classes.
As to the question of how to pass events between the button and the login screen: this is just a matter of taste and implementation.
The point of event-driven programming is that the source and the observer are only coupled in the sense that the source must send some sort of notification and the target must be able to handle it.
someButton.onClick = handler; basically does exactly that, but it's just so elegant and concise that you don't make a fuss about it.
someButton.onClick(handler); is probably a little better, since you can have multiple handlers, although this is rarely required of UI components. But in the end, if you want signallers, go with signallers.
Now when it comes to AOP, it is not the right approach in this situation. It is not a clever hack to wire up components with one another, but a way of dealing with cross-cutting concerns, such as adding a log, a history, or even things like a persistence layer across a multitude of modules.
In general, try not to modularize or split the little parts of your application.
It is ok to have some spaghetti in your codebase, as long as
the spaghetti segments are well encapsulated
the spaghetti segments are small enough to be understood or otherwise refactored/rewritten in a reasonable amount of time, without breaking the app (which point no. 1 should guarantee)
Try rather to split the whole application into autonomous parts, which interact through concise interfaces. If a part grows too big, refactor it just the same way.
edit:
In response to Tom's questions:
That's a matter of taste. In some frameworks people go as far as using external configuration files, but that makes little sense with haXe, since you need to instruct the compiler to force compilation of the dependencies you inject at runtime. Setting up the dependencies in your code, in a central file, is just as much work and far simpler. For more structure, you can split the app into "modules", each module having a loader class responsible for registering the implementations it provides. In your main file, you load the modules.
That depends. I tend to declare them in the package of the class depending on them, and later refactor them into an extra package in case they prove to be needed elsewhere. By using anonymous types, you can also completely decouple things, but you'll have a slight performance hit on platforms such as flash9.
I wouldn't abstract the button and then inject an implementation through IoC, but feel free to do so. I would explicitly create it, because in the end, it's just a button. It has a style, a caption, a screen position and size, and it fires click events. I think this is unnecessary modularization, as pointed out above.
Stick to SRP. If you do, no class will grow unnecessarily big. The role of the Main class is to initialize the app. When done, it should pass control to a login controller, and when that controller acquires a user object, it can pass it on to the main controller of the actual app, and so forth. I suggest you read a bit about behavioral patterns to get some ideas.
greetz
back2dos
First of all, I'm not familiar with Haxe at all. However, I would answer that what is described here sounds remarkably similar to how I've learned to do things in .NET, so it sounds to me like this is good practice.
In .NET, you have an "Event" that fires when a user clicks a button to do something (like logon) and then a method executes to "handle" the event.
There will always be code that describes what method is executed in one class when an event in another class is fired. It is not unnecessarily complex, it is necessarily complex. In the Visual Studio IDE, much of this code is hidden in "designer" files, so I don't see it on a regular basis, but if your IDE doesn't have this functionality, you've got to write the code yourself.
As for how this works with your custom loader class, I hope someone here can provide you an answer.
