Compiled LINQ with Generic Repository Design Pattern

I've been looking around the web but I've yet to find any information on this. As we know, LINQ gives us CompiledQuery, which transforms the expression into T-SQL before running it. I'm trying to design a generic repository to interact with my EF model, with the exception that the LINQ queries are compiled. If anyone could shed some light on this, that would be great :)

It is hardly possible, because to pre-compile a query you must know it in advance. With a generic repository you usually have only this:
public interface IRepository<T>
{
    IQueryable<T> GetQuery();
}
So the code using a repository instance is responsible for defining the query. Pre-compilation requires a concrete repository containing methods like:
IEnumerable<Order> GetOrdersWithHeaderAndItemsByDate(DateTime date, int take, int skip);
IEnumerable<OrderHeader> GetOrderHeadersOrderedByCustomer(int take, int skip);
etc.
Obviously you can hardly prepare such queries in a generic repository, because they depend on the concrete entity.

You are looking for an implementation of the Specification pattern. Basically, this means creating a Specification object that contains the information needed to filter your query. By using specifications, you can keep a generic repository implementation and put your custom query logic in the specification. The specification base class looks something like:
public class Specification<TEntity>
{
    private readonly Expression<Func<TEntity, bool>> _predicate;

    public Specification(Expression<Func<TEntity, bool>> predicate)
    {
        _predicate = predicate;
    }

    public bool IsSatisfiedBy(TEntity entity)
    {
        return _predicate.Compile().Invoke(entity);
    }

    public Expression<Func<TEntity, bool>> PredicateExpression
    {
        get { return _predicate; }
    }
}
A very helpful article about implementing the specification pattern with the Entity Framework can be found at http://huyrua.wordpress.com/2010/07/13/entity-framework-4-poco-repository-and-specification-pattern/

Related

Java 8 Application layer and specific output transformation

I have a Gradle multi-project build with 2 subprojects, trying to emulate a hexagonal architecture:
rest-adapter
application layer
I don't want the application services to expose the domain models, and I don't want to force a specific representation as output. So I would like the application services to consume 2 args (a command and something) and return a T. The client configures the service.
The rest adapter doesn't have access to the domain model, so I can't return the domain models and let the adapter create its representation.
What about the something? I tried:
have a signature <T> List<T> myUseCase(Command c, Function<MyDomainModel, T> fn). The application layer is the owner of the transformation functions (because the signature uses MyDomainModel) and exposes a dictionary of functions, so the rest controller references one of them. It works, but I'm searching for a better, more elegant way if one exists.
have a signature <T> List<T> myUseCase(Command c, FnEnum fn), where each enum constant has an associated Function. I found this signature more elegant: the consumer indicates which transformation it wants via an enum. But it doesn't work, because the generic method doesn't compile: the T cannot be resolved. So far I haven't found a way.
something with a Java 8 Consumer or Supplier, or something else, but I failed to wrap my head around it.
I'm feeling there's a more elegant solution for this kind of problem: a service which accepts a client-provided function that transforms the result and builds the output.
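For what it's worth, the enum attempt fails because Java enums cannot declare their own type parameters, so a single FnEnum cannot associate a Function<MyDomainModel, T> with a different T per constant. A sketch of one workaround, assuming a hypothetical MyDomainModel and invented transformation names, is an "enum-like" holder class whose constants each keep their own type:

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

// Hypothetical domain model for the sketch.
class MyDomainModel {
    final String name;
    MyDomainModel(String name) { this.name = name; }
}

// Enum-like holder: each constant carries its own T, which a real enum cannot do.
final class Transformations {
    static final Function<MyDomainModel, String> AS_NAME = m -> m.name;
    static final Function<MyDomainModel, Integer> AS_LENGTH = m -> m.name.length();
    private Transformations() {}
}

public class UseCaseDemo {
    // Stand-in for the application service method <T> List<T> myUseCase(Command c, fn).
    static <T> List<T> myUseCase(List<MyDomainModel> models, Function<MyDomainModel, T> fn) {
        return models.stream().map(fn).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<MyDomainModel> models = List.of(new MyDomainModel("Alice"), new MyDomainModel("Bob"));
        System.out.println(myUseCase(models, Transformations.AS_NAME));   // [Alice, Bob]
        System.out.println(myUseCase(models, Transformations.AS_LENGTH)); // [5, 3]
    }
}
```

The consumer still picks a named transformation, but the holder class preserves each function's return type, which is exactly what the enum version loses.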
I think that what you need to implement is the so-called "Data Transformer" pattern.
Imagine that you have a use case that returns a certain domain object (for example "User"), but you shouldn't expose the domain to clients, and you want every client to choose the format of the returned data.
So you define a data transformer interface for the domain object:
public interface UserDataTransformer {
    void write(User user);
    String read();
}
For every output format your clients need you define a class implementing the interface. For example if you want to represent the User in XML format:
public class UserXMLDataTransformer implements UserDataTransformer {

    private String xmlUser;

    @Override
    public void write(User user) {
        this.xmlUser = xmlEncode(user);
    }

    private String xmlEncode(User user) {
        String xml = << transform user to xml format >>;
        return xml;
    }

    @Override
    public String read() {
        return this.xmlUser;
    }
}
Then you make your application service depend on the data transformer interface, injecting it in the constructor:
public class UserApplicationService {

    private UserDataTransformer userDataTransformer;

    public UserApplicationService(UserDataTransformer userDataTransformer) {
        this.userDataTransformer = userDataTransformer;
    }

    public void myUseCase(Command c) {
        User user = << call the business logic of the domain and construct the user object you wanna return >>;
        this.userDataTransformer.write(user);
    }
}
And finally, the client could look something like this:
public class XMLClient {
    public static void main(String[] args) {
        UserDataTransformer userDataTransformer = new UserXMLDataTransformer();
        UserApplicationService userService = new UserApplicationService(userDataTransformer);
        Command c = << data input needed by the use case >>;
        userService.myUseCase(c);
        String xmlUser = userDataTransformer.read();
        System.out.println(xmlUser);
    }
}
I've considered that the output is a String, but you could perhaps use generics to return any type you want.
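A minimal sketch of that generic variant (the DataTransformer, User and UserXmlTransformer names here are hypothetical, not part of any framework): R is the representation type each adapter chooses.

```java
// Hypothetical generic transformer: T is the domain type, R the representation
// type the client-chosen adapter produces.
interface DataTransformer<T, R> {
    void write(T value);
    R read();
}

// Simple domain object for the sketch.
class User {
    final String name;
    User(String name) { this.name = name; }
}

// One adapter per output format; this one produces an XML-ish String.
class UserXmlTransformer implements DataTransformer<User, String> {
    private String xml;
    public void write(User user) { xml = "<user><name>" + user.name + "</name></user>"; }
    public String read() { return xml; }
}

public class TransformerDemo {
    public static void main(String[] args) {
        DataTransformer<User, String> t = new UserXmlTransformer();
        t.write(new User("Alice"));
        System.out.println(t.read()); // <user><name>Alice</name></user>
    }
}
```

A JSON adapter would simply be another implementation with R = String (or any other type), and the application service would stay unchanged.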
I haven't mentioned it, but this approach of injecting the transformer into the application service follows the "ports and adapters" pattern. The transformer interface would be the port, and every class implementing it would be an adapter for the desired format.
Also, this was just an example. You can use a dependency injection framework like Spring to create the component instances and wire them all together, and you should use the composition root pattern to do it.
Hope this example helped.
I'm feeling there's a more elegant solution for this kind of problem: a service which accepts a client-provided function that transforms the result and builds the output.
You are sending data across the boundary between the application and the REST layer (and presumably between the application and the REST consumer); it may be useful to think about messaging patterns.
For example, the application can define a service provider interface that defines a contract/protocol for accepting data from the application.
interface ResponseBuilder {...}
void myUseCase(Command c, ResponseBuilder builder)
The REST adapter provides an implementation of the ResponseBuilder that can take the inputs and generate some useful data structure from them.
The response builder semantics (the names of the functions in the interface) might be drawn from the domain model, but the arguments will normally be either primitives or other message types.
CQS would imply that a query should return a value; so in that case you might prefer something like
interface ResponseBuilder<T> {
    ...
    T build();
}
<T> T myUseCase(Command c, ResponseBuilder<T> builder)
If you look carefully, you'll see that there's no magic here; we've simply switched from having a direct coupling between the application and the adapter to having an indirect coupling with the contract.
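A dependency-free sketch of this contract (the userName method and the CSV format are invented for illustration; only the ResponseBuilder<T> shape comes from the answer above). The application pushes primitives through the builder; the adapter owns T:

```java
// Hypothetical sketch: the adapter owns T, the application only calls
// builder methods whose arguments are primitives or message types.
interface ResponseBuilder<T> {
    void userName(String name); // semantics drawn from the domain
    T build();
}

// REST-adapter-side implementation: renders the data as a CSV string.
class CsvResponseBuilder implements ResponseBuilder<String> {
    private final StringBuilder sb = new StringBuilder();

    public void userName(String name) {
        if (sb.length() > 0) sb.append(",");
        sb.append(name);
    }

    public String build() { return sb.toString(); }
}

public class BuilderDemo {
    // Application-side use case: feeds domain data to the builder, returns T.
    static <T> T myUseCase(String[] domainNames, ResponseBuilder<T> builder) {
        for (String n : domainNames) builder.userName(n);
        return builder.build();
    }

    public static void main(String[] args) {
        String csv = myUseCase(new String[] {"Alice", "Bob"}, new CsvResponseBuilder());
        System.out.println(csv); // Alice,Bob
    }
}
```

Note the application never names the output format; a JSON or XML builder would plug in the same way.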
EDIT
My first solution uses a Function<MyDomainModel, T>, which is a bit different from your ResponseBuilder, but in the same vein.
It's almost dual to it. You'd probably be a little bit better off with a less restrictive signature on myUseCase
<T> List<T> myUseCase(Command c, Function<? super MyDomainModel, T> fn)
The dependency structure is essentially the same -- the only real difference is what the REST adapter is coupled to. If you think the domain model is stable, and the output representations are going to change a lot, then the function approach gives you the stable API.
I suspect that you will find, however, that the output representations stabilize long before the domain model does, in which case the ResponseBuilder approach will be the more stable choice.
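To see why the `? super` bound is less restrictive, here is a small sketch (DomainBase and the id field are hypothetical): a function written against a supertype of MyDomainModel can still be passed to the use case.

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

// Hypothetical domain hierarchy to show what `? super` buys the caller.
class DomainBase {
    final String id;
    DomainBase(String id) { this.id = id; }
}

class MyDomainModel extends DomainBase {
    MyDomainModel(String id) { super(id); }
}

public class VarianceDemo {
    // Accepts any function defined on MyDomainModel OR one of its supertypes.
    static <T> List<T> myUseCase(List<MyDomainModel> models,
                                 Function<? super MyDomainModel, T> fn) {
        return models.stream().map(fn).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // This function is written against the supertype, yet still usable here:
        Function<DomainBase, String> idOf = d -> d.id;
        System.out.println(myUseCase(List.of(new MyDomainModel("42")), idOf)); // [42]
    }
}
```

With a plain Function<MyDomainModel, T> parameter, passing idOf above would not compile, even though it can obviously handle every MyDomainModel.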

Multi-Column Search with Spring JPA Specifications

I want to create a multi-field search in a Spring Boot back-end. How do I do this with a Specification<T>?
Environment
Spring Boot
Hibernate
Gradle
IntelliJ
The UI in the front end is a jQuery DataTable. Each column allows a single string search term to be applied. Search terms across more than one column are joined by an AND.
I have the filters coming from the front end already getting populated into a Java object.
Step 1
Extend JPA Specification executor
public interface SomeRepository extends JpaRepository<Some, Long>,
        PagingAndSortingRepository<Some, Long>, JpaSpecificationExecutor {
}
Step 2
Create a new class SomeSpec
This is where I am lost as to what the code looks like and how it works.
Do I need a method for each column?
What are Root and CriteriaBuilder?
What else is required?
I am rather new at JPA, so while I don't need anyone to write the code for me, a detailed explanation would be good.
UPDATE
It appears QueryDSL is the easier and better way to approach this. I am using Gradle. Do I need to change my build.gradle from this?
If you don't want to use QueryDSL, you'll have to write your own specifications. First of all, you need to extend your repository from JpaSpecificationExecutor like you did. Make sure to add the generic though (JpaSpecificationExecutor<Some>).
After that you'll have to create three specifications (one for each column); in the Spring docs they define these specifications as static methods in a class. Basically, creating a specification means that you'll have to implement Specification<Some>, which has only one method, toPredicate(Root<Some>, CriteriaQuery<?>, CriteriaBuilder).
If you're using Java 8, you can use lambdas to create an anonymous inner class, eg.:
public class SomeSpecs {
    public static Specification<Some> withAddress(String address) {
        return (root, query, builder) -> {
            // ...
        };
    }
}
For the actual implementation, you can use the Root to get to a specific node, e.g. root.get("address"). The CriteriaBuilder, on the other hand, is used to define the where clause, e.g. builder.equal(..., ...).
In your case you want something like this:
public class SomeSpecs {
    public static Specification<Some> withAddress(String address) {
        return (root, query, builder) -> builder.equal(root.get("address"), address);
    }
}
Or alternatively if you want to use a LIKE query, you could use:
public class SomeSpecs {
    public static Specification<Some> withAddress(String address) {
        return (root, query, builder) -> builder.like(root.get("address"), "%" + address + "%");
    }
}
Now you repeat this for the other fields you want to filter on. After that you'll have to combine the specifications (using and(), or(), ...). Then you can use the repository.findAll(Specification) method to query based on that specification, for example:
public List<Some> getSome(String address, String name, Date date) {
    return repository.findAll(where(withAddress(address))
            .and(withName(name))
            .and(withDate(date)));
}
You can use static imports to import withAddress(), withName() and withDate() to make it easier to read. The where() method can also be statically imported (comes from Specification.where()).
Be aware though that the method above may have to be tweaked since you don't want to filter on the address field if it's null. You could do this by returning null, for example:
public List<Some> getSome(String address, String name, Date date) {
    return repository.findAll(where(address == null ? null : withAddress(address))
            .and(name == null ? null : withName(name))
            .and(date == null ? null : withDate(date)));
}
You could consider using Spring Data's support for QueryDSL, as you would get quite a lot without having to write very much code, i.e. you would not actually have to write the specifications.
See here for an overview:
https://spring.io/blog/2011/04/26/advanced-spring-data-jpa-specifications-and-querydsl/
Although this approach is really convenient (you don't even have to write a single line of implementation code to get the queries executed) it has two drawbacks: first, the number of query methods might grow for larger applications because of - and that's the second point - the queries define a fixed set of criterias. To avoid these two drawbacks, wouldn't it be cool if you could come up with a set of atomic predicates that you could combine dynamically to build your query?
So essentially your repository becomes:
public interface SomeRepository extends JpaRepository<Some, Long>,
        PagingAndSortingRepository<Some, Long>, QueryDslPredicateExecutor<Some> {
}
You can also get request parameters automatically bound to a predicate in your Controller:
See here:
https://spring.io/blog/2015/09/04/what-s-new-in-spring-data-release-gosling#querydsl-web-support
So your controller would look like:
@Controller
class SomeController {

    private final SomeRepository repository;

    @RequestMapping(value = "/", method = RequestMethod.GET)
    String index(Model model,
                 @QuerydslPredicate(root = Some.class) Predicate predicate,
                 Pageable pageable) {
        model.addAttribute("data", repository.findAll(predicate, pageable));
        return "index";
    }
}
So with the above in place, it is simply a case of enabling QueryDSL on your project, and the UI should then be able to filter, sort and page data by various combinations of criteria.

Unit testing MVC controllers that use NHibernate, with and without implementing a repository pattern

I have an MVC app that uses NHibernate for ORM. Each controller takes an ISession constructor parameter that is then used to perform CRUD operations on domain model objects. For example,
public class HomeController : Controller
{
    private ISession _session;

    public HomeController(ISession session)
    {
        _session = session;
    }

    public ViewResult Index(DateTime minDate, DateTime maxDate)
    {
        var surveys = _session.CreateCriteria<Survey>()
            .Add(Expression.Like("Name", "Sm%"))
            .Add(Expression.Between("EntryDate", minDate, maxDate))
            .AddOrder(Order.Desc("EntryDate"))
            .SetMaxResults(10)
            .List<Survey>();
        // other logic that I want to unit test that does operations on the surveys variable
        return View(someObject);
    }
}
I would like to unit test this controller in isolation, without actually hitting the database, by mocking the ISession object using Moq or RhinoMocks. However, it is going to be very difficult to mock the ISession interface in the unit test, because it is being used via a fluent interface that chains a number of calls together.
One alternative is to wrap the ISession usage via a repository pattern. I could write a wrapper class something like this:
public interface IRepository
{
    List<Survey> SearchSurveyByDate(DateTime minDate, DateTime maxDate);
}

public class SurveyRepository : IRepository
{
    private ISession _session;

    public SurveyRepository(ISession session)
    {
        _session = session;
    }

    public List<Survey> SearchSurveyByDate(DateTime minDate, DateTime maxDate)
    {
        return _session.CreateCriteria<Survey>()
            .Add(Expression.Like("Name", "Sm%"))
            .Add(Expression.Between("EntryDate", minDate, maxDate))
            .AddOrder(Order.Desc("EntryDate"))
            .SetMaxResults(10)
            .List<Survey>();
    }
}
I could then re-write my controller to take an IRepository constructor argument, instead of an ISession argument:
public class HomeController : Controller
{
    private IRepository _repository;

    public HomeController(IRepository repository)
    {
        _repository = repository;
    }

    public ViewResult Index(DateTime minDate, DateTime maxDate)
    {
        var surveys = _repository.SearchSurveyByDate(minDate, maxDate);
        // other logic that I want to unit test that does operations on the surveys variable
        return View(someObject);
    }
}
This second approach would be much easier to unit test, because the IRepository interface would be much easier to mock than the ISession interface, since it involves just a single method call. However, I really don't want to go down this route, because:
1) It seems like a really bad idea to create a whole new layer of abstraction and a lot more complexity just to make a unit test easier, and
2) There is a lot of commentary out there that rails against the idea of using a repository pattern with NHibernate, since the ISession interface is already a repository-like interface (see especially Ayende's posts here and here), and I tend to agree with this commentary.
So my question is: is there any way I can unit-test my initial implementation by mocking the ISession object? If not, is my only recourse to wrap the ISession query in the repository pattern, or is there some other way I can solve this?
Oren tends to wander around a lot. He used to be a huge proponent of Repositories and Unit of Work. He will probably swing back around again to it, but with a different set of requirements.
Repository has some very specific advantages that none of Oren's comments have quite found solutions for. Also, what he recommends has its own set of limitations and problems; sometimes I feel like he's just exchanging one set of problems for another. The repository is also good when you need to provide different views of the same data, such as a web service or a desktop application, while still keeping the web app.
Having said that, he has a lot of good points. I'm just not sure there are good solutions for them yet.
Repository is still very useful for highly test driven scenarios. It's still useful if you don't know if you will stick with a given ORM or persistence layer and might want to swap it out with another one.
Oren's solution tends to couple NHibernate more tightly into the app. That may not be a problem in many situations; in others it might be.
His approach of creating dedicated query classes is interesting, and is sort of a first step towards CQRS, which might be a better total solution. But software development is still much more art or craft than science. We're still learning.
Rather than mocking out ISession have you considered having your tests inherit from a base fixture that makes use of SQLite?
public class FixtureBase
{
    protected ISession Session { get; private set; }
    private static ISessionFactory _sessionFactory { get; set; }
    private static Configuration _configuration { get; set; }

    [SetUp]
    public void SetUp()
    {
        Session = SessionFactory.OpenSession();
        BuildSchema(Session);
    }

    private static ISessionFactory SessionFactory
    {
        get
        {
            if (_sessionFactory == null)
            {
                var cfg = Fluently.Configure()
                    .Database(FluentNHibernate.Cfg.Db.SQLiteConfiguration.Standard.ShowSql().InMemory())
                    .Mappings(configuration => configuration.FluentMappings.AddFromAssemblyOf<Residential>())
                    .ExposeConfiguration(c => _configuration = c);
                _sessionFactory = cfg.BuildSessionFactory();
            }
            return _sessionFactory;
        }
    }

    private static void BuildSchema(ISession session)
    {
        var export = new SchemaExport(_configuration);
        export.Execute(true, true, false, session.Connection, null);
    }

    [TearDown]
    public void TearDownContext()
    {
        Session.Close();
        Session.Dispose();
    }
}
Introducing repositories with named query methods does not add complexity to your system. Actually it reduces complexity and makes your code easier to understand and maintain. Compare original version:
public ViewResult Index(DateTime minDate, DateTime maxDate)
{
    var surveys = _session.CreateCriteria<Survey>()
        .Add(Expression.Like("Name", "Sm%"))
        .Add(Expression.Between("EntryDate", minDate, maxDate))
        .AddOrder(Order.Desc("EntryDate"))
        .SetMaxResults(10)
        .List<Survey>();
    // other logic which operates on the surveys variable
    return View(someObject);
}
Frankly speaking, all my memory slots were already occupied BEFORE I got to the actual logic of your method. It takes time for the reader to understand which criteria you are building, which parameters you are passing and which values are returned. And I need to switch contexts between lines of code: I start thinking in terms of data access and Hibernate, then suddenly I'm back at the business logic level. And what if you have several places where you need to search surveys by date? Duplicate all this stuff?
And now I read the version with the repository:
public ViewResult Index(DateTime minDate, DateTime maxDate)
{
    var surveys = _repository.SearchSurveyByDate(minDate, maxDate);
    // other logic which operates on the surveys variable
    return View(someObject);
}
It takes me zero effort to understand what is happening here. This method has a single responsibility and a single level of abstraction. All the data-access-related logic is gone, and the query logic is not duplicated in different places. Actually, I don't care how it is implemented. Should I care at all, if the main goal of this method is some other logic?
And, of course, you can write a unit test for your business logic with no effort (also, if you are using TDD, the repository gives you the ability to test your controller before you actually write the data access logic, and by the time you start writing the repository implementation, you will already have designed the repository interface):
[Test]
public void ShouldDoOtherLogic()
{
    // Arrange
    Mock<ISurveyRepository> repository = new Mock<ISurveyRepository>();
    repository.Setup(r => r.SearchSurveyByDate(minDate, maxDate))
        .Returns(surveys);

    // Act
    HomeController controller = new HomeController(repository.Object);
    ViewResult result = controller.Index(minDate, maxDate);

    // Assert
}
BTW, in-memory database usage is good for acceptance testing, but for unit testing I think it's overkill.
Also take a look at NHibernate Lambda Extensions or QueryOver in NHibernate 3.0, which use expressions to build criteria instead of strings; your data access code will not break if you rename some field.
And also take a look at Range for passing pairs of min/max values.

Using Precompiled linq queries with the repository pattern in EF

Is it possible to use precompiled LINQ queries with repositories? Currently I have my repositories set up like:
public class CustomerRepository : EntityRepository
{
    private readonly IContext _context;

    public CustomerRepository(UnitOfWork uow)
    {
        _context = uow.context;
    }
}
I would be able to create a precompiled query in the following manner by using my actual context class MyEntities : ObjectContext, IContext.
static Func<ObjectContext, int, Customer> _custByID;

public Customer GetCustomer(int ID)
{
    if (_custByID == null)
    {
        _custByID = CompiledQuery.Compile<MyEntities, int, Customer>(
            (ctx, id) => ctx.Customers.Where(c => c.CustomerID == id).Single());
    }
    return _custByID.Invoke(_context, ID);
}
The problem is that the Compile method's TArg0 takes a type derived from ObjectContext. Since my whole purpose of using repositories with IContext was to hide Entity Framework related code, using the above doesn't make sense. How should I go about using precompiled LINQ queries? Should I move them to a separate class library which references my model and the Entity Framework, or is my understanding of repositories incorrect? I am using EF4 in an ASP.NET application.
The repository implementation should have knowledge of the data access technology; the responsibility of the repository is to talk to the underlying data source in order to satisfy the contract. It would be useless to have a repository if you cannot perform such optimizations, because the ObjectSet is already a repository. Creating another layer of indirection between the repository and EF is a useless abstraction.

Design Patterns using IQueryable<T>

With the introduction of .NET 3.5 and the IQueryable<T> interface, new patterns will emerge. While I have seen a number of implementations of the Specification pattern, I have not seen many other patterns using this technology. Rob Conery's Storefront application is another concrete example using IQueryable<T> which may lead to some new patterns.
What patterns have emerged from the useful IQueryable<T> interface?
It has certainly made the repository pattern much simpler to implement as well. You can essentially create a generic repository:
public class LinqToSqlRepository : IRepository
{
    private readonly DataContext _context;

    public LinqToSqlRepository(DataContext context)
    {
        _context = context;
    }

    public IQueryable<T> Find<T>() where T : class
    {
        return _context.GetTable<T>(); // linq 2 sql
    }

    /** snip: Insert, Update etc.. **/
}
and then use it with linq:
var query = from customers in _repository.Find<Customer>()
            select customers;
I like the repository-filter pattern. It allows you to separate concerns between the middle tier and the data tier without sacrificing performance.
Your data layer can concentrate on simple list-get-save style operations, while your middle tier can utilize extensions to IQueryable to provide more robust functionality:
Repository (Data layer):
public class ThingRepository : IThingRepository
{
    public IQueryable<Thing> GetThings()
    {
        return from m in context.Things
               select m; // Really simple!
    }
}
Filter (Service layer):
public static class ServiceExtensions
{
    public static IQueryable<Thing> ForUserID(this IQueryable<Thing> qry, int userID)
    {
        return from a in qry
               where a.UserID == userID
               select a;
    }
}
Service:
public IQueryable<Thing> GetThingsForUserID(int userID)
{
    return repository.GetThings().ForUserID(userID);
}
This is a simple example, but filters can be safely combined to build more complicated queries. The performance is saved because the list isn't materialized until all the filters have been built into the query.
I love it because I dislike application-specific repositories!
