What is the difference between table-per-service and schema-per-service? - microservices

My question should be self-explanatory, but I am learning about microservices and the three types of database configurations to use with them:
Database-per-service
Schema-per-service
Table-per-service
Assigning each service its own database is an easy concept to understand, but I am failing to see the distinction between 2 and 3, and all my googling returns similarly vague explanations: a private set of tables vs. a private schema. Isn't that basically the same thing?

Related

Multiple Service in Spring Boot App, is it good practice?

I am writing my first bigger API, which is based on three entities (cinema, movies, movie properties). I tried to divide my methods like below:
MovieServiceImpl:
saveMovie(), findMovieByID(), deleteMovieByID(),
showMovieWithCinemasList(), enrolledPropertyToMovie()
CinemaServiceImpl:
saveCinema(), enrolledMovieToCinema(), showCinemasWithMovieList()
PropertyServiceImpl:
saveProperty(), findPropertyByID()
ReservationServiceImpl:
showFreePlaceOnMovie(), showDateChosenMovie(), showRepertoire(),
showCinemasWithMoviesList(), multiplePlaceReservation()
Question
Is this compatible with the Single Responsibility Principle (SRP)?
And is it good practice to divide it into smaller services? If not, what should I do?
Your question is a bit unclear.
The Single Responsibility Principle mandates a single responsibility for a single structural unit of work (class, module, interface, method, whatever unit your language has), i.e. one entity/unit is responsible for (solves) one particular problem.
For instance, if you have a method:
Person getPersonById(int id) {...}
you should never implement validation or authentication logic in it; instead, you should delegate that responsibility to another layer/object.
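For illustration, a minimal sketch of what that delegation can look like (the names here are made up, not from the question):
// Minimal sketch with hypothetical names: the lookup method keeps a single
// responsibility and delegates validation to a separate collaborator.
class Person { }

interface PersonRepository {
    Person findById(int id);
}

class PersonIdValidator {
    void validate(int id) {
        if (id <= 0) {
            throw new IllegalArgumentException("id must be positive");
        }
    }
}

class PersonService {
    private final PersonIdValidator validator;
    private final PersonRepository repository;

    PersonService(PersonIdValidator validator, PersonRepository repository) {
        this.validator = validator;
        this.repository = repository;
    }

    Person getPersonById(int id) {
        validator.validate(id);         // validation lives elsewhere
        return repository.findById(id); // this method only does the lookup
    }
}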
This has almost nothing to do with "can I have several types in the application?", and to be honest, I can't think of any application (unless it's a really small console app) that doesn't have several structural units (classes, interfaces, etc.) in it.
Your example doesn't show anything about SRP; it merely demonstrates that you have different types for different business domain models, and that's a much, much better way of designing an application/types than having one messy class that works with all the different domain objects/logic.
If, however, you violate SRP in the methods of those classes, that's another story, and we can't address it here, as you don't show implementations.
I still think your question was more about having different types instead of incorporating all of them into one (a horrible idea!), and in your example Movie, Cinema and Property are three different domain entities, which is, again, good.
Regarding:
And is it good practice to divide it into smaller services? If not, what should I do?
It's unclear what you mean by "smaller services" and what your question is about. It seems to me that you're working with an n-tier/layered architecture, and the only thing I can tell you is that, yes, it's good to separate responsibility layers.
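As a rough sketch of that layering (hypothetical names, loosely following the Movie example from the question), each layer has one job and only talks to the layer directly below it:
// Rough layering sketch with made-up names; not the asker's actual code.
class Movie { }

interface MovieRepository {        // persistence layer: data access only
    Movie findById(long id);
}

class MovieService {               // service layer: business rules
    private final MovieRepository repository;
    MovieService(MovieRepository repository) { this.repository = repository; }
    Movie findMovieById(long id) { return repository.findById(id); }
}

class MovieController {            // web layer: turns requests into service calls
    private final MovieService service;
    MovieController(MovieService service) { this.service = service; }
    Movie getMovie(long id) { return service.findMovieById(id); }
}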
I still think your question was more about having different types instead of incorporating all of them into one (a horrible idea!), and in your example Movie, Cinema and Property are three different domain entities, which is, again, good.
Yes, my question was just about that!
And by the way, you made me realize that I should separate the validations into other methods so that my code becomes better and meets the SRP as well.
Thanks for the answer; although it seemed logical to me, I needed confirmation from someone experienced. Thank you very much!

Laravel Repository pattern and many to many relation

In our new project we decided to use hexagonal architecture. We use the repository pattern to gain more data-access abstraction, and the command bus pattern as the service layer.
Our dashboard page needs a lot of data, and because of that we have to traverse a three-level chain of many-to-many relations (user -> projects -> skills -> reviews); also, skills should be active (status = 1).
The problem arises here: where should I put this?
1. $userRepository->getDashboardData($userId);
2. $userRepository->getUser($userId)->withProjects()->withActiveSkills()->withReviews();
3. $user = $userRepository->getById($userId);
   $projects = $projectRepository->getByUserId($user->id);
   $skills = $skillRepository->getActiveSkillsByProjectsIds($projectIds);
In this case I couldn't find any benefit of the repository pattern except coding to an interface, which can be achieved with a model interface.
I think solution 3 is perfect, but it adds a lot of work.
You have to decide (for example) from an object-oriented perspective if a "User" returned is one that has a collection of skills within it. If so, your returned user will already have those objects.
In the case of using regular objects, try to avoid child entities unless it makes good sense, for example when the 'User' entity is responsible for ensuring that the child entities play by the business rules. Prefer to use a different repository to select the other types of entities based on whatever other criteria.
Talking about a "relationship" in this way makes me feel like you're using ActiveRecord, because otherwise they'd just be child objects. The "relationship" exists in the relational database; it only creeps into your objects if you're mixing database record and object, as with ActiveRecord.
In the case of using ActiveRecord objects, you might consider having specific methods on the repository to load the correctly configured member objects. $members->allIncludingSkills() or something perhaps. This is because you have to solve for N+1 when returning multiple entities. Then, you need to use eager-loading for the result set and you don't want to use the same eager loading configuration for every request.. Therefore, you need a way to delineate configurations per request.. One way to do this is to call different methods on the repository for different requests.
However, for me.. I'd prefer not to have a bunch of objects with just.. infinite reach.. For example.. You can have a $member->posts[0]->author->posts[0]->author->posts[0]->author->posts[0].
I prefer to keep things as 'flat' as possible.
$member = $members->withId($id);
$posts = $posts->writtenBy($member->id);
Or something like that. (just typing off the top of my head).
Nobody likes tons of nested arrays, and ActiveRecord can be abused to the point where its objects are essentially arrays with methods and the potential for infinite nesting. So, while it can be a convenient way to work with data, I would work to prevent abusing relationships as a concept and keep your structures as flat as possible.
It's not only very possible to code without ORM 'relationship' functionality.. It's often easier.. You can tell that this functionality adds a ton of trouble because of just how many features the ORM has to provide in order to try to mitigate the pain.
And really, what's the point? It just keeps you from having to use the ID of a specific Member to do the lookup? Maybe it's easier to loop over a ton of different things I guess?
Repositories are really only particularly useful in the ActiveRecord case if you want to be able to test your code in isolation. Otherwise, you can create scopes and whatnot using Laravel's built-in functionality to prevent the need for redundant (and consequently brittle) query logic everywhere.
It's also perfectly reasonable to create models that exist SPECIFICALLY for the UI. You can have more than one ActiveRecord model that uses the same database table, for example, that you use just for a specific user-interface use-case. Dashboard for example. If you have a new use-case.. You just create a new model.
This, to me.. Is core to designing systems. Asking ourselves.. Ok, when we have a new use-case what will we have to do? If the answer is, sure our architecture is such that we just do this and this and we don't really have to mess with the rest.. then great! Otherwise, the answer is probably more like.. I have no idea.. I guess modify everything and hope it works.
There's many ways to approach this stuff. But, I would propose to avoid using a lot of complex tooling in exchange for simpler approaches / solutions. Repository is a great way to abstract away data persistence to allow for testing in isolation. If you want to test in isolation, use it. But, I'm not sure that I'm sold much on how ORM relationships work with an object model.
For example, do we have some massive Member object that contains the following?
All comments ever left by that member
All skills the member has
All recommendations that the member has made
All friend invites the member has sent
All friends that the member has established
I don't like the idea of these massive objects that are designed to just be containers for absolutely everything. I prefer to break objects into bits that are specifically designed for use-cases.
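To make that concrete: the question is about Laravel, but the idea is language-agnostic, so here is a rough Java-flavoured sketch with made-up names of a read model that exists only for the dashboard use-case, instead of one giant User object that drags every relation along.
import java.util.List;

// Hypothetical sketch: a small, flat read model shaped for one screen.
class DashboardRow {
    final String projectName;
    final String skillName;
    final int reviewCount;

    DashboardRow(String projectName, String skillName, int reviewCount) {
        this.projectName = projectName;
        this.skillName = skillName;
        this.reviewCount = reviewCount;
    }
}

interface DashboardReadRepository {
    // One query shaped for this screen; "active skills only" lives here,
    // not on a general-purpose User aggregate.
    List<DashboardRow> rowsForUser(long userId);
}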
But, I'm rambling. In short..
Don't abuse ORM relationship functionality.
It's better to have multiple small objects that are specifically designed for a use-case than a few large ones that do everything.
Just my 2 cents.

How to correctly use Spring Data Repository#save()?

In Spring Data Repository interfaces, the following operation is defined:
public T save(T entity);
... and the documentation states that the application should continue working with the returned entity.
I know about the reasoning behind this decision, and it makes sense. I can also see that this works perfectly fine for simple models with independent entities. But given a more complex JPA model with lots of @OneToMany and @ManyToMany connections, the following question arises:
How is the application supposed to use the returned object when all the rest of the loaded model still references the old one that was passed into save(...)? Also, there might be collections in the application that still contain the old entity. The JVM does not allow you to globally "swap" the unsaved entity for the saved one.
So what is the correct usage pattern? Any best practices? I have only encountered toy examples so far that do not use @OneToMany or @ManyToMany and thus don't run into this issue. I'm sure that a lot of smart people thought long and hard about this, but I can't see how to use this properly.
This is covered in section 3.2.7.1 of the JPA specification, which describes how merge should work. In a nutshell, if the instance being saved is managed (existing), it is simply saved in place. If not, it is copied to a managed instance (which is not necessarily a different object, since the spec does not mandate that a new instance be created in this case), and all references from the instance being saved to other managed entities are also updated to refer to the managed instance. This of course requires that the relationships have been correctly defined on the entity being saved.
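In practice that means you simply keep working with whatever save() hands back. A minimal sketch, assuming a Spring Data JPA repository (the Person entity and PersonRepository are made-up names, with PersonRepository assumed to extend a Spring Data repository interface):
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Minimal sketch, hypothetical names: continue with the instance returned by
// save(); the detached argument you passed in may not be the managed copy.
@Service
public class PersonRenamer {

    private final PersonRepository personRepository;

    public PersonRenamer(PersonRepository personRepository) {
        this.personRepository = personRepository;
    }

    @Transactional
    public Person rename(Person detached, String newName) {
        detached.setName(newName);
        Person managed = personRepository.save(detached); // effectively a merge
        // Use 'managed' from here on; 'detached' stays unmanaged, and other
        // managed entities reachable from 'managed' reference the managed copy.
        return managed;
    }
}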
Indeed, this does not cover the case of storing an entity instance in an unmanaged collection (such as a static collection). That is in any case not advisable, because a persisted entity should always be loaded through the persistence provider (for all you know, the entity may have changed in the persistent store).
Since I have been using JPA for many years and have never faced problems, I am confident that the behaviour described in the section referenced above works well in all scenarios (subject to the JPA provider implementing it as intended). You should try some of the cases that worry you and post separate questions if you run into problems.

Basic Entity Framework Questions

I have an existing database, which I have been happily accessing using LINQtoSQL. Armed with Sanderson's MVC3 book I thought I'd have a crack at EF4.3, but am really fighting to get even basic functionality working.
Working with SQL 2008, VS2010, the folder architecture appears to be:
ABC.Domain.Abstract
ABC.Domain.Concrete
ABC.Domain.Concrete.ORM
ABC.Domain.Entities
Per examples, repository interfaces are abstract, actual repositories are concrete. Creating EDMX from the existing database puts that in the ORM folder and the Entities holds the classes I designed as part of the domain. So far so good.
However! I have not once persuaded the deceptively simple EfDbContext : DbContext class, with the method below, to work...
public DbSet<ABC.Domain.Entities.Person> Person { get { return _context.Persons; }}
It complains about missing keys, that Person is not an entity class, that it cannot find the conceptual model, and so on.
Considering I have a basic connection string in the web.config, why is it not creating a model on the fly and doing simple matching?
Should the ORM folder exist, or should it simply be Concrete? (I have a .SQL subfolder for the LINQtoSQL concrete classes, so it suits me to have .ORM, but if it's a flaw, let's fix it.)
Should I have my homespun entities AND the automatically produced ones or just one set?
The automatic ones inherit from EntityObject; mine are just POCOs, or POCOs with complex types, and do not inherit from anything.
What ties the home designed Domain.Entities.Person type to the Persons property of the Context?
Sanderson's book implies that the matching is implicit if properties are identical, which they are, but that does not do it.
The app.config has an EF flavoured connection string in it, the web.config has a normal connection string in it. Which should I be using - assuming web.config at the moment - so do I delete app.config?
Your help is appreciated. Long time spent, no progress for some days now.
What ties the home designed Domain.Entities.Person type to the Persons property of the Context?
You seem to have a misunderstanding here. Your domain entities are the entities for the database. There aren't two sets. If you actually want to have two sets of object classes (for whatever reason) you must write any mapping between the two manually. EF only knows about the classes which are part of the entity model.
You should also - if you are using EF 4.3 - apply the DbContext Generator T4 template to the EDMX file. Do not work with EntityObject-derived entities! That is not supported with DbContext. The generator will build a set of POCO classes and prepare a derived DbContext. This set of POCO classes are the only entities the DbContext will know about, and they should be your only set of domain entities.
The created DbContext will contain simple DbSet properties with automatic getters and setters...
public DbSet<Person> People { get; set; }
...and the Person class will be created as POCO as well.
Download the Entity Framework Power Tools:
http://visualstudiogallery.msdn.microsoft.com/72a60b14-1581-4b9b-89f2-846072eff19d
Right-click in your project to 'reverse engineer an existing database'; it will create the code classes for you. No need to use an EDMX, and this method will create the DbContext-derived class for you.
There are many questions here and you won't get a single answer to them all, but I'll stick in my 5 pence for what it's worth.
Sanderson's MVC3 book
Your problems are not to do with MVC3, they are to do with Entity Framework and data persistence layer.
ABC.Domain.Abstract, ABC.Domain.Concrete, ABC.Domain.Concrete.ORM, ABC.Domain.Entities
Can you say why it is separated this way? I would argue that ABC.Domain should contain your POCOs, independent of your persistence layer (EF) and your presentation layer (MVC). Your list implies that your domain contains the ORM and your data access entities. I'm not arguing here; what I'm trying to say is that you need to understand what you really need.
At the end of the day, I'm certain that a simple example would suffice with ABC.DataAccess, ABC.Domain and ABC.Site.
Do you understand why repositories are abstract and concrete? If you don't, then leave out interfaces and see whether you can improve it with interfaces later.
Person is not an entity class, it cannot find the conceptual model, and so on.
Now, there are multiple ways you can get EF to persist data for you. You can use code first, where, as the name implies, you write the code first and EF generates the database, relations and all the relevant constraints for you.
You can use database first, where EF will generate the relevant classes and data-access-related objects from your database. This is the less preferable method for me, as it relies heavily upon your database structure.
You can use model first, where you design your classes in the EDMX designer and it then generates the relevant SQL for you.
All of these might sound like a bit of a black box, but for what you are trying to achieve, all of them will work. EDMX is a good way to learn, and there are many step-by-step tutorials on ASP.NET.
but if it's a flaw, let's fix it).
You will have to fix and refactor it yourself; there is no other way to improve, in my honest opinion. I can give you a different folder/namespace structure, but there will always be a "better" one.
Should I have my homespun entities AND the automatically produced ones or just one set?
Now this depends on the approach that you have chosen: database first, code first, code only, and whatever else is there. If you are following domain-driven design, then you will have to work with classes that represent your business logic and that are not tied to your data persistence or presentation layers; therefore POCOs are the way forward.
What ties the home designed Domain.Entities.Person type to the Persons property of the Context?
Now this again depends on the model that you are using.
The app.config and web.config
When you are running your web application, the connection string from the web application (web.config) will be used. Please correct me if I'm wrong.
Your help is appreciated. Long time spent, no progress for some days now.
General advice: leave MVC alone for the time being. Get it to work in a console application and make sure you feel comfortable with the options offered by EF. Good luck :)
The solution to why nothing worked code-first...
...turned out to be a reference to System.Data.EntityClient in the connection string, which ought to have read System.Data.SqlClient.
Without this provider entry being correct, it was unable to work code-first.
Finding which connection string it was using was a case of deliberately misspelling a keyword in each of the candidate connection strings (they were all named correctly, but lived in app.config and in two places in web.config). With a distinct naming error in each, when the application threw an error trying to create the domain model it was easy to identify which connection string my derived DbContext class was using. Correcting the ProviderName made all the difference.
Code-first is now working just fine, with seeded values on model changes.

Persistence framework?

I'm trying to decide on the best strategy for accessing the database. I understand that this is a generic question and there's no single good answer, but I will provide some guidelines on what I'm looking for.
For the last few years we have been using our own persistence framework, which, although limited, has served us well. However, it needs some major improvements, and I'm wondering if I should go that way or use one of the existing frameworks. The criteria I'm looking for, in order of importance, are:
Client code should work with clean objects, with no database knowledge. When using our custom framework the client code looks like:
SessionManager session = new SessionManager();
Order order = session.CreateEntity();
order.Date = DateTime.Now;
// Set other properties
OrderDetail detail = order.AddOrderDetail();
detail.Product = product;
// Other properties
// Commit all changes now
session.Commit();
It should be as simple as possible and not "too flexible". We need a single way to do most things.
It should have good support for object-oriented programming: handle one-to-many and many-to-many relations, handle inheritance, and support lazy loading.
Configuration is preferred to be XML-based.
With my current knowledge I see these options:
Improve our current framework - Problem is that it needs a good deal of effort.
ADO.NET Entity Framework - Don't have a good understanding, but seems too complicated and has bad reviews.
LINQ to SQL - Does not have good handling of object-oriented practices.
nHibernate - Seems a good option, but some users report too many archaic errors.
SubSonic - From a short introduction, it seems too flexible. I do not want that.
What will you suggest?
EDIT:
Thank you Craig for the elaborate answer. I think it will help more if I give more details about our custom framework. I'm looking for something similar. This is how our custom framework works:
1. It is based on DataSets, so the first thing you do is configure the DataSets and write the queries you need there.
2. You create an XML configuration file that specifies how DataSet tables map to objects and also specifies the associations between them (support for all types of associations).
3. A custom tool parses the XML configuration and generates the necessary code.
4. Generated classes inherit from a common base class.
To be compatible with our framework the database must meet these criteria:
Each table should have a single column as its primary key.
All tables must have a primary key of the same data type, generated on the client.
To handle inheritance, only single-table inheritance is supported. Also, the XML file almost always offers a single way to achieve something.
What we want to support now is:
Remove the dependency on DataSets. SQL code should be generated automatically, but the framework should NOT generate the schema; I want to control the DB schema manually.
More robust support for inheritance hierarchies.
Optional integration with LINQ.
I hope it is clearer now what I'm looking for.
Improve our current framework - Problem is that it needs a good deal of effort
In your question, you have not given a reason why you should rewrite functionality which is available from so many other places. I would suggest that reinventing an ORM is not a good use of your time, unless you have unique needs for the ORM which you have not specified in your question.
ADO.NET Entity Framework
We are using the Entity Framework in the real world, production software. Complicated? No more so than most other ORMs as far as I can tell, which is to say, "fairly complicated." However, it is relatively new, and as such there is less community experience and documentation than something like NHibernate. So the lack of documentation may well make it seem more complicated.
The Entity Framework and NHibernate take distinctly different approaches to the problem of bridging the object-relational divide. I've written about that in a good bit more detail in this blog post. You should consider which approach makes the most sense to you.
There has been a great deal of commentary about the Entity Framework, both positive and negative. Some of it is well-founded, and some of it seems to come from people who are pushing other solutions. The well-founded criticisms include:
Lack of POCO support. This is not an issue for some applications; it is an issue for others. POCO support will likely be added in a future release, but today, the best the Entity Framework can offer is IPOCO.
A monolithic mapping file. This hasn't been a big issue for us, since our metadata is not in constant flux.
However, some of the criticisms seem to me to miss the forest for the trees. That is, they talk about features other than the essential functionality of object relational mapping, which the Entity Framework has proven to us to do very well.
LINQ to SQL - Does not have good handling of object-oriented practices
I agree. I also don't like the SQL Server focus.
nHibernate - Seems a good option, but some users report too many archaic errors.
Well, the nice thing about NHibernate is that there is a very vibrant community around it, and when you do encounter those esoteric errors (and believe me, the Entity Framework also has its share of esoteric errors; it seems to come with the territory) you can often find solutions very easily. That said, I don't have a lot of personal experience with NHibernate beyond the evaluation we did which led to us choosing the Entity Framework, so I'm going to let other people with more direct experience comment on this.
SubSonic - From a short introduction, it seems too flexible. I do not want that.
SubSonic is, of course, much more than just an ORM, and SubSonic users have the option of choosing a different ORM implementation instead of using SubSonic's ActiveRecord. As a web application framework, I would consider it. However, its ORM feature is not its raison d'ĂȘtre, and I think it's reasonable to suspect that the ORM portion of SubSonic will get less attention than the dedicated ORM frameworks do.
LLBLGen is a very good ORM tool which will do almost all of what you need.
iBATIS is my favourite because you get a finer grain of control over the SQL.
Developer Express Persistent Objects, or XPO as it is better known. I have been using it for 3 years. It provides everything you need, except that it is commercial and you tie yourself to another (single) company for your development. Other than that, Developer Express is one of the best component and framework providers for the .NET platform.
An example of XPO code would be:
using (UnitOfWork uow = new UnitOfWork())
{
    Order order = new Order(uow);
    order.Date = DateTime.Now;
    uow.CommitChanges();
}
I suggest taking a look at ActiveRecord from Castle.
I don't have production experience with it; I've just played around with their sample app. It seems really easy to work with, but I don't know it well enough to know whether it fits all your requirements.
