How to implement Abstract Factory for windowing systems - user-interface

I was just starting a new coding project. I may be ahead of myself, but I've gotten kinda stuck. I wanted to implement an Abstract Factory for the GUI, similar to the example on Wikipedia. However various systems have their own parameters for creating windows. At present I have come up with the following solutions to my dilemma:
1. Create a type which varies based on compiler directives.
2. Don't use compiler directives and just put everything in a type that contains every possible data member.
3. Create a polymorphic hierarchy and use dynamic casting inside each window function.
4. Use some sort of intermediate singleton that holds the information. This seems especially unhelpful and would likely also involve casting.
5. Use a different pattern, such as Builder, instead.
My objective is to create high level interfaces that are uniform, so that creating a window, etc. is the same for all platforms.
I hesitate to do #5 simply because it seems like this would be a common enough problem that there should already be a solution. This is just a toy, so it's more about learning than building a practical application. I know I could use existing code bases, but that wouldn't achieve my real objective.
Thanks in advance.

I think it depends on the situation, but how about using an Abstract Factory with a Builder (inside the factory) and a Decorator with some default values for the GUI components? The decorator would have the same interface for similar components from different GUI libraries while extending the class from the GUI library.

After reading more I've realized I can use Dependency Injection to create the concrete factory first. Since the entry point knows what kind of factory it's using, that factory can be passed to the client. I can't believe I didn't see it before, but I don't think Dependency Injection "clicked" until now.

I would put the system-specific parameters in the constructor of each concrete factory.
public interface WindowFactory {
    Window build();
}

public class WindowsWindowFactory implements WindowFactory {
    private final Object param1, param2, param3;   // Windows-specific creation parameters

    public WindowsWindowFactory(Object param1, Object param2, Object param3) {
        this.param1 = param1; this.param2 = param2; this.param3 = param3;   // set params
    }

    public Window build() {
        return /* create a Window from the Windows-specific params */ null;
    }
}

public class LinuxWindowFactory implements WindowFactory {
    private final Object param1, param2;           // Linux-specific creation parameters

    public LinuxWindowFactory(Object param1, Object param2) {
        this.param1 = param1; this.param2 = param2;   // set params
    }

    public Window build() {
        return /* create a Window from the Linux-specific params */ null;
    }
}
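Tying this back to the Dependency Injection point above: only the entry point needs to know which concrete factory to construct, and everything else works against WindowFactory. Here is a minimal sketch of that composition root (the Application class and the os.name check are illustrative assumptions, not part of the answers above):

public final class Application {
    private final WindowFactory factory;   // injected; platform-agnostic from here on

    public Application(WindowFactory factory) {
        this.factory = factory;
    }

    public void run() {
        Window window = factory.build();   // the same call on every platform
        // ... add widgets, show the window, run the event loop ...
    }

    public static void main(String[] args) {
        // The composition root is the only place that knows which platform it is running on.
        WindowFactory factory = System.getProperty("os.name").startsWith("Windows")
                ? new WindowsWindowFactory("My App", 800, 600)
                : new LinuxWindowFactory("My App", 800);
        new Application(factory).run();
    }
}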

Related

Is there any way to intercept all Linq to SQL queries?

I've built some code that can rebuild expression trees so I can avoid triggering the "no supported translation to SQL" exception, and it works fine as long as I call my function to replace the IQueryable. The problem is that I'd like it to be applied automatically to all queries in my project, without having to remember to call this function on each one separately. Is there any way I can intercept everything?
I've tried using Reflection.Emit to create a wrapping provider and using reflection to replace it on the data context, but it turns out that even with Reflection.Emit I can't implement the internal IProvider interface.
I've also tried replacing the provider with a RealProxy-based class, and that works for non-compiled queries, but the CompiledQuery.Execute method throws an exception because the proxy won't cast to the SqlProvider class. I tried replacing the result of the Compile method on the provider with another proxy so I could intercept the Execute call, but that failed a check on the return type being correct.
I'm open to any other ideas, or to ways of building on what I've already tried.
It's hard to tell whether this is an applicable solution without seeing your code, but if you have a DI-friendly app architecture you can implement an interceptor and have your favorite IoC container emit the appropriate type for you, at run-time.
Esoteric? A little. Consider an interface like this:
public interface ISomeService
{
    IEnumerable<SomeEntity> GetSomeEntities();
    // ...
}
This interface might be implemented like this:
public class SomeService : ISomeService
{
    private readonly DbContext _context;      // this is a dependency!
    private readonly IQueryTweaker _tweaker;  // this is a dependency!

    public SomeService(DbContext context, IQueryTweaker tweaker) // this is constructor injection!
    {
        _context = context;
        _tweaker = tweaker;
    }

    public IEnumerable<SomeEntity> GetSomeEntities()
    {
        return _tweaker.TweakTheQuery(_context.SomeEntities).ToList();
    }
}
Every time you implement a method of the ISomeService interface, there's always a call to _tweaker.TweakTheQuery() wrapping the IQueryable. That not only gets boring, it also feels like a missing feature - the same feeling you'd get from wrapping every one of these calls inside a try/catch block, or, if you're familiar with MVVM in WPF, from raising that annoying PropertyChanged event in every single property setter of your ViewModel.
With DI interception, you factor this requirement out of your "normal" code and into an "interceptor": you basically tell the IoC container that instead of binding ISomeService directly to the SomeService implementation, you're going to decorate it with an interceptor and emit another type, perhaps SomeInterceptedService (the name is irrelevant; the actual type only exists at run-time), which "injects" the desired behavior into the desired methods. Simple? Not exactly.
If you haven't designed your code with DI in mind (are your dependencies "injected" into your classes' constructor?), it could mean a major refactoring.
The first step breaks your code: remove the IQueryTweaker dependency and all the TweakTheQuery calls from all ISomeService implementations, so that they look like this - note that the method to be intercepted is now virtual:
public class SomeService : ISomeService
{
    private readonly DbContext _context;

    public SomeService(DbContext context)
    {
        _context = context;
    }

    public virtual IEnumerable<SomeEntity> GetSomeEntities()
    {
        return _context.SomeEntities.ToList();
    }
}
The next step is to configure the IoC container so that it knows to inject the SomeService implementation whenever a type's constructor requires an ISomeService:
_kernel.Bind<ISomeService>().To<SomeService>();
At that point you're ready to configure the interception - if using Ninject this could help.
But before jumping into that rabbit hole you should read this article, which shows how the decorator and interceptor patterns are related.
The key point is that you're not intercepting anything internal to LINQ to SQL or the .NET Framework itself - you're intercepting your own method calls and wrapping them with your own code. With a little help from any decent IoC container, you'll be intercepting the methods that call upon LINQ to SQL, rather than the direct calls to LINQ to SQL itself. Essentially, the IQueryTweaker dependency becomes a dependency of your interceptor class, and you only code its usage once.
An interesting thing about DI interception is that interceptors can be combined: you can have an ExecutionTimerServiceInterceptor on top of an AuditServiceInterceptor, on top of a CircuitBreakerServiceInterceptor... and the best part is that you can configure your IoC container so that you completely forget it exists. As you add more service classes to the application, all you need to do is follow a naming convention you've defined, and voilà: you've just written a service that not only accomplishes the strictly data-related tasks you coded, but also disables itself for 3 minutes if the database server is down and stays disabled until it's back up, logs all inserts, updates and deletes, and stores its execution time in a database for performance analysis. The term automagical seems appropriate.
This technique - interception - can be used to address cross-cutting concerns; another way to address those is through AOP, although some articles (and Mark Seemann's excellent Dependency Injection in .NET) clearly demonstrate how AOP frameworks are a less ideal solution than DI interception.

Spring with Neo4j, GraphRepository<?> vs handmade interface

I found out that there is an interface called GraphRepository. I have a repository for users implementing a homemade interface that does its job, but I was wondering: shouldn't I implement GraphRepository instead? Even if it would be quite long to implement and some methods would be useless, I think it is a standard, and I have already re-coded a lot of the methods that are defined in this interface.
So should I write "YAGNI" code or not respect the standard ?
What is your advice ?
You don't need to actually implement GraphRepository; you extend it. The principle of Spring Data is that all the boilerplate CRUD code is taken care of for you (by proxying at startup time), so all you have to do is create an interface for your specific entity that extends GraphRepository, and then add only the specific methods you require.
For example, if I have an entity CustomerNode, I can get the standard CRUD methods by creating a new interface CustomerNodeRepository extends GraphRepository<CustomerNode, Long>. All the methods from GraphRepository (e.g. save, findAll, findOne, delete, deleteAll, etc.) are then accessible from CustomerNodeRepository and implemented by Spring Data Neo4j without my having to write a single line of implementation code.
The pattern now lets you work on the repository code that is actually specific to your domain (e.g. findByNameAndDateOfBirth) rather than the simple CRUD stuff.
The Spring Data package is very useful for repository interaction. It can reduce huge amounts of code (I have seen an 80%+ reduction in lines of code) and I would highly recommend using it.
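As a sketch, such an interface might look like this (CustomerNode and findByNameAndDateOfBirth are just the illustrative names used in this answer; the imports assume Spring Data Neo4j 3.x):

import java.util.Date;
import java.util.List;
import org.springframework.data.neo4j.repository.GraphRepository;

public interface CustomerNodeRepository extends GraphRepository<CustomerNode, Long> {
    // Derived query: Spring Data generates the implementation from the method name.
    List<CustomerNode> findByNameAndDateOfBirth(String name, Date dateOfBirth);
}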
Edit: implementing custom execution
If you want to add your own custom behavior to a repository method, you use the concept of merging interfaces with a custom implementation. For example, let's say I want to create a method called findCustomerNodeBySomeStrangeCriteria, and to do this I actually want to link off to a relational database to perform the lookup.
First, we define a separate, standalone interface that only includes our "extra" method.
public interface CustomCustomerNodeRepository {
    List<CustomerNode> findCustomerNodeBySomeStrangeCriteria(Object strangeCriteria);
}
Next, we update our normal interface to extend not only GraphRepository, but our new custom interface too.
public interface CustomerNodeRepository extends GraphRepository<CustomerNode,Long>, CustomCustomerNodeRepository {
}
The last piece is to actually implement our findCustomerNodeBySomeStrangeCriteria method.
public class CustomerNodeRepositoryImpl implements CustomCustomerNodeRepository {
    public List<CustomerNode> findCustomerNodeBySomeStrangeCriteria(Object criteria) {
        // implementation code (e.g. query the relational database and map the results)
    }
}
So, there are a couple of points to note:
We create a separate interface to define any custom methods that have custom implementations (as distinct from Spring Data-compatible "findBy..." methods).
Our CustomerNodeRepository interface (our "main" interface) extends both GraphRepository and our "custom" one.
We implement only the "custom" method, in a class that implements only the custom interface.
The "custom" implementation class must (by default) be named after our "main" interface with an Impl suffix in order to be picked up by Spring Data (so in this case CustomerNodeRepositoryImpl).
Under the covers, Spring Data delivers a proxy implementation of CustomerNodeRepository as a merge of the auto-built GraphRepository and our class implementing CustomCustomerNodeRepository. The reason for the class name is to allow Spring Data to pick it up easily (this can be overridden so it doesn't look for *Impl).
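As a usage sketch (the service class and its field are hypothetical), the merged proxy is injected and used like any other Spring bean:

import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class CustomerNodeService {

    @Autowired
    private CustomerNodeRepository repository;   // Spring injects the merged proxy

    public void demo() {
        repository.save(new CustomerNode());     // provided by GraphRepository
        List<CustomerNode> found =
                repository.findCustomerNodeBySomeStrangeCriteria("criteria");   // from the custom impl
    }
}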

SD MongoDB polymorphism in subdocument

I just started developing an app in Java with spring-data-mongodb and came across an issue that I haven't been able to solve.
I have a couple of document beans like this:
@Document(collection="myBeanBar")
public class BarImpl implements Bar {
    String id;
    Foo foo;
    // More fields and methods ...
}

@Document
public class FooImpl implements Foo {
    String id;
    String someField;
    // some more fields and methods ...
}
And I have a repository class with a method that simply invokes a find similar to this:
public List<? extends Bar> findByFooField(final String fieldValue) {
    Query query = Query.query(Criteria.where("foo.someField").is(fieldValue));
    return getMongoOperations().find(query, BarImpl.class);
}
Saving a Bar works just fine; it is saved in Mongo along with the "_class" attribute for both Foo and Bar. However, finding by some attribute of Foo throws an exception like this:
Exception in thread "main" java.lang.IllegalArgumentException: No property someField found on test.Foo!
    at org.springframework.data.mapping.context.AbstractMappingContext.getPersistentPropertyPath(AbstractMappingContext.java:225)
    at org.springframework.data.mongodb.core.convert.QueryMapper.getPath(QueryMapper.java:202)
    at org.springframework.data.mongodb.core.convert.QueryMapper.getTargetProperty(QueryMapper.java:190)
    at org.springframework.data.mongodb.core.convert.QueryMapper.getMappedObject(QueryMapper.java:86)
    at org.springframework.data.mongodb.core.MongoTemplate.doFind(MongoTemplate.java:1336)
    at org.springframework.data.mongodb.core.MongoTemplate.doFind(MongoTemplate.java:1322)
    at org.springframework.data.mongodb.core.MongoTemplate.find(MongoTemplate.java:495)
    at org.springframework.data.mongodb.core.MongoTemplate.find(MongoTemplate.java:486)
Which, after some digging, makes some sense, since nowhere in the query is the sub-document's concrete type specified, and the entity information for Bar says the type of foo is Foo (not FooImpl), which in turn cannot have properties because it is an interface.
My question is: Is there a way to specify it or work-around this issue without declaring the sub-document type as a concrete type?
I've been googling it for a couple of days and looking at the documentation and API and the source code but I can not find a clear way to do it. I'd really appreciate your help.
Thank you very much.
I had a similar problem: I have a class that implements an interface, and when I use findAll I get the error:
org.springframework.data.mapping.model.MappingInstantiationException: Could not instantiate bean class [test.MetaClasse]: Specified class is an interface.
After debugging the Spring Data code, I realized that the mapper uses @TypeAlias to discover the type it has to instantiate, so I just put @TypeAlias("FullClassName") on my implementations of test.MetaClasse and it worked!
I tested it with your situation and it works!
As mentioned in this comment, the solution of putting the full class name in the type alias is imperfect, as it makes refactoring cumbersome.
Instead you can just configure type mappings and make it work automagically. Here's how:
First, you'll need to annotate BarImpl and FooImpl with @TypeAlias. The alias doesn't have to be a full class name; it could be anything else, for example @TypeAlias("bar_impl") and @TypeAlias("foo_impl") respectively.
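Applied to the beans from the question, that could look something like this (a sketch; other fields and methods omitted):

import org.springframework.data.annotation.TypeAlias;
import org.springframework.data.mongodb.core.mapping.Document;

@Document(collection="myBeanBar")
@TypeAlias("bar_impl")
public class BarImpl implements Bar {
    String id;
    Foo foo;
}

@Document
@TypeAlias("foo_impl")
public class FooImpl implements Foo {
    String id;
    String someField;
}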
Then we’re going to need the reflections library. Pick the latest version for the build tool of your choice here.
For example with Gradle:
implementation("org.reflections:reflections:0.10.2")
Now we’re going to need a small extension to DefaultMongoTypeMapper to make it easy to configure and instantiate. Here’s how it would look in Kotlin:
class ReflectiveMongoTypeMapper(
    private val reflections: Reflections = Reflections("com.example")
) : DefaultMongoTypeMapper(
    DEFAULT_TYPE_KEY,
    listOf(
        ConfigurableTypeInformationMapper(
            reflections.getTypesAnnotatedWith(TypeAlias::class.java).associateWith { clazz ->
                getAnnotation(clazz, TypeAlias::class.java)!!.value
            }
        ),
        SimpleTypeInformationMapper(),
    )
)
where com.example is either your base package or the package with MongoDB models.
This way we will find all classes annotated with @TypeAlias and register the alias-to-type mappings automatically.
Next, we'll need to adjust the app's Mongo configuration a bit. The configuration has to extend AbstractMongoClientConfiguration, and we need to override the mappingMongoConverter method to make use of the mapper we created before. It should look like this:
override fun mappingMongoConverter(
    databaseFactory: MongoDatabaseFactory,
    customConversions: MongoCustomConversions,
    mappingContext: MongoMappingContext,
) = super.mappingMongoConverter(databaseFactory, customConversions, mappingContext).apply {
    setTypeMapper(ReflectiveMongoTypeMapper())
}
Done!
Now all alias to type mappings will be registered automatically on context startup and all your polymorphic fields will work just fine.
You can check the full code example on GitHub.
Also, here's a blog post where you can read about the root cause of this issue as well as check other ways to solve it (in case you don't want to rely on reflection): https://blog.monosoul.dev/2022/09/16/spring-data-mongodb-polymorphic-fields/

When should I use binding annotations vs more-specific interfaces?

Question
What criteria should be used when deciding between:
specifying a dependency with an annotation, and
specifying a dependency with a more specific interface
Example
Suppose I have:
interface FooLoader {
    Foo loadById(long id);
}

class DBFooLoader implements FooLoader {
    ... jdbc etc. etc. ...
}

class CachingFooLoader implements FooLoader {
    ...
    @Inject
    public CachingFooLoader(FooLoader delegate) {
        this.delegate = delegate;
    }
    ...
}
Suppose I want to bind FooLoader to CachingFooLoader; I have [at least] two ways to wire this:
Use an annotation binding
Change:
public CachingFooLoader(FooLoader delegate)
to:
public CachingFooLoader(@NonCaching FooLoader delegate)
and then:
bind(FooLoader.class).annotatedWith(NonCaching.class).to(DBFooLoader.class);
Create a more specific interface
Change:
public CachingFooLoader(FooLoader delegate)
to:
public CachingFooLoader(NonCachingFooLoader delegate)
where NonCachingFooLoader simply extends FooLoader; then have DBFooLoader implement NonCachingFooLoader and wire everything up accordingly.
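A rough sketch of that second option, using the question's names (the module wiring shown assumes Guice, as in the annotation example above):

public interface NonCachingFooLoader extends FooLoader {
}

class DBFooLoader implements NonCachingFooLoader {
    // ... jdbc etc. etc. ...
}

// In the module:
bind(NonCachingFooLoader.class).to(DBFooLoader.class);
bind(FooLoader.class).to(CachingFooLoader.class);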
My thoughts
I am drawn to using an annotation binding for multiple reasons:
Keys can be more easily reused than interfaces, which decreases the combinatorial explosion that interfaces would suffer from.
It is less invasive: configuration stays in Guice modules, rather than "poisoning" classes.
However, creating a more specific interface has its advantages too:
Interfaces have more meaning. Typically only Guice will read the annotation, whereas interfaces are used for much more.
So, what criteria should be used to determine which approach to take?
(Spring users, as far as I can tell, this is what you guys call qualifiers.)
Use specific interfaces only if it makes sense, i.e. if they have to offer a different API and other classes will therefore use them in a specific way.
If they offer the same "service" in different ways, then use a single common interface and differentiate the implementations with annotations.
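For completeness, the annotation-based wiring from the question could live in a module something like this (FooModule is a hypothetical name; NonCaching is assumed to be declared as a Guice binding annotation with runtime retention):

import com.google.inject.AbstractModule;

public class FooModule extends AbstractModule {
    @Override
    protected void configure() {
        // Anyone asking for a plain FooLoader gets the caching decorator...
        bind(FooLoader.class).to(CachingFooLoader.class);
        // ...while the decorator's @NonCaching dependency resolves to the DB-backed loader.
        bind(FooLoader.class).annotatedWith(NonCaching.class).to(DBFooLoader.class);
    }
}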

Castle Windsor - Lookup Method Injection for transient instances

The short question:
Does Castle Windsor have something similar to Spring.Net's "Lookup Method Injection" that can be configured from XML, which provides the ability to fetch transient instances from the container without the class being aware of the IoC container?
The long question:
I'm a long-time Spring/Spring.Net user and I have been experimenting with Castle Windsor by trying to port a project over to it. Spring.Net has a concept of "Lookup Method Injection" which (from the Spring docs)...
Lookup method injection is the ability of the container to override methods on container-managed objects, to return the result of looking up another named object in the container. The lookup typically involves a prototype object as in the scenario described in the preceding section. The Spring framework implements this method injection by dynamically generating a subclass that overrides the method, using the classes in the System.Reflection.Emit namespace.
What this means is, If I had the following...
public class SomeTransient
{
    // ... I have dependencies that need to be filled by the IoC container
}

public class SomeClass
{
    public virtual void Work()
    {
        var o = CreateTransient();
        // ... use the transient instance
    }

    // Overridden by the container to return a fresh, fully wired-up instance.
    public virtual SomeTransient CreateTransient() { return null; }
}
I can instruct Spring to override the CreateTransient method and have it return a new, container-created transient instance (with its dependencies initialized) each time the method is called.
The unique part of this is that it doesn't require direct links to the Spring Framework (e.g. SomeClass doesn't have to implement a specific interface).
Is there something similar in Castle Windsor to accomplish this via XML?
(I will eventually move away from XML config, but at the moment I'm just trying to get it running)
Castle has something better: Typed Factories.
You can even inject a delegate!
http://stw.castleproject.org/Windsor.Typed-Factory-Facility-delegate-based-factories.ashx
It is better because it does not depend on dynamically generated code, and it looks much cleaner.
It looks cleaner because the class doesn't depend on someone overriding that method; as written, it is impossible to test the class without subclassing it.
If you really want to do something like this, I would expect:
public abstract class SomeClass
{
    public abstract SomeTransient CreateTransient();
}
but... again it doesn't feel right.
Edit 2
Unity 2 supports this kind of delegate factory; you can read more here:
http://www.truewill.net/myblog/index.php/2010/05/06/unity_2_0_combining_injectionfactory_and
Thanks to @eiximenis.
