Imagine, for example, that you have front-end and back-end applications written in different technologies; let's say the back end is in Python using Django and the front end is in TypeScript using Angular.
Now there will be some data that needs to be shared between the two: enums, serialized dictionaries of class instances, or the names of some fields.
Very quickly a problem arises of data-structure duplication and the possibility of desynchronisation. (E.g. you have to have the exact same enums on both platforms.)
I was wondering: are there any "best practices" out there?
Like XML-based standardization of the data, or something?
Could you point me to some books / articles?
Could you share your knowledge of how you do this?
Thank you.
I know two ways to solve this obstacle:
Write your own clients for consumers.
Use model contracts like RAML and generate models from declarations.
And you don't have to have the exact same enum or any other class on both platforms. There should be a layer that consumes and returns data to the outside world (the external client). This layer has its own models; everything that lies below should have its own as well. You can have many small objects stored in the database but return huge aggregates to the client, and those would be different models. Read about data models in a standard N-tier application.
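To make that boundary concrete, here is a minimal TypeScript sketch (all names are hypothetical): the internal persistence models and the aggregate returned to the client are separate types, and only the API layer knows how to map between them.

```typescript
// Internal persistence models: small rows, as stored in the database.
interface OrderRow {
  id: number;
  customerId: number;
  amountCents: number;
}

interface CustomerRow {
  id: number;
  name: string;
}

// External model: the aggregate the API layer returns to clients.
// Clients never see the internal row shapes.
interface OrderView {
  orderId: number;
  customer: string;
  amount: string; // formatted for display, e.g. "12.34"
}

// The API layer owns the mapping between internal and external models.
function toOrderView(order: OrderRow, customer: CustomerRow): OrderView {
  return {
    orderId: order.id,
    customer: customer.name,
    amount: (order.amountCents / 100).toFixed(2),
  };
}
```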
I am new to NestJS and GraphQL, and I am learning by going over some tutorials. There appears to be an inconsistency in the usage of the terminology model or entity. The NestJS schematics resource generator for GraphQL code first produces entities, yet the examples shown on their website use models.
produces entities:
nx generate @nestjs/schematics:resource generated --language=ts --type=graphql-code-first
uses models, with no mention of entities in the code-first approach:
https://docs.nestjs.com/graphql/resolvers
Which terminology is most appropriate?
Thank You,
Michael
Both are generally correct. It comes down to naming preferences.
I view entities as database entities, or database table maps. They map from your database data to a class representation that your code will understand. Models can also be used for this, which I believe is the term that Sequelize and Mongoose prefer to use.
Models, as described in the docs you linked, are generally your DTOs, your schema objects that you expect the API to accept and respond with.
You'll notice that the generator also generates two @InputType() files as well, which will be more closely tied to your incoming DTO, while the entity.ts will be closer to your response DTO.
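As a minimal code-first sketch in the spirit of the NestJS docs (names are illustrative): the @ObjectType() class plays the model/entity role for responses, while the @InputType() class shapes what the API accepts.

```typescript
import { Field, Int, InputType, ObjectType } from '@nestjs/graphql';

// The "model" (or "entity"): shapes what the GraphQL API returns.
@ObjectType()
export class Author {
  @Field(() => Int)
  id: number;

  @Field({ nullable: true })
  firstName?: string;
}

// The input type: shapes what the API accepts, e.g. for a createAuthor mutation.
@InputType()
export class CreateAuthorInput {
  @Field({ nullable: true })
  firstName?: string;
}
```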
So, both are correct, and it comes down to naming preferences.
I’m new to F# but would like to start building an API with it.
I have previously built C# APIs. In those we layer the architecture between models (in the controllers), domain DTOs in the services, and finally the data layer if there needs to be any persistence.
Between these layers we use AutoMapper to map the types up and down the layers, keeping a clean separation; for example, none of our domain objects can have anything null in them (we use a maybe/option), and we cater for this in our mappings between the objects.
I have seen examples of building APIs with F# and lots of getting-started material, but no good examples on architecture. I have also seen people try to use AutoMapper with F#, but it seems shoehorned in, and you don't see anyone use it in any of the example tutorials on the net, at least that I have found.
I'm just wondering what people do. Do you use something like AutoMapper in F#? Do you keep a separation between your models and domain DTOs? Do you care? Am I thinking of this in too much of an OOP way?
The closest I have come to any help has been on https://fsharpforfunandprofit.com/posts/recipe-part3/
I'm wondering what people do. Do you keep a separation between your models and domain DTOs?
It's not that common but we prefer to keep everything manual. We don't create DTOs.
If we want to transfer objects we serialize and deserialize manually without creating additional objects.
DTOs are often used as an aid to serialization, but then you have to do just as much work to write the object <-> DTO mapping and end up with slow, reflection-based, unoptimized serialization.
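To illustrate the manual approach (sketched here in TypeScript rather than F#, with hypothetical names): the domain type itself is written to and read from JSON, with a small hand-written check on the way in instead of an intermediate DTO and a mapping step.

```typescript
// Domain type; no separate DTO class.
interface Account {
  id: number;
  owner: string;
}

// Manual serialization: the domain object goes straight to the wire format.
function serializeAccount(account: Account): string {
  return JSON.stringify(account);
}

// Manual deserialization: parse and validate by hand instead of mapping
// through an intermediate DTO.
function deserializeAccount(json: string): Account {
  const raw = JSON.parse(json);
  if (typeof raw.id !== 'number' || typeof raw.owner !== 'string') {
    throw new Error('Invalid account payload');
  }
  return { id: raw.id, owner: raw.owner };
}
```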
Until now, in my MVC application, I've been using the Model mainly just to access the database, and very little else. I've always looked upon the Controller as the true brains of the operation. But I'm not sure if I've been correctly utilizing the MVC model.
For example, assume a database of financial transactions (order number, order items, amount, customer info, etc.). Now, assume there is a function to process a .csv file, and return it as an array, to be inserted into the database of transactions.
I've placed my .csv parse function in my Controller, then the controller passes the parsed information to a function in the Model to be inserted. However, strictly speaking, should the .csv parsing function be included in the Model instead?
EDIT: For clarity's sake, I specifically am using CodeIgniter, however the question does pertain to MVC structure in general.
The internet is full of discussion about what is true MVC. This answer is from the perspective of the CodeIgniter (CI) implementation of MVC. Read the official line here.
As it says on the linked page "CodeIgniter has a fairly loose approach to MVC...". Which, IMO, means there aren't any truly wrong ways to do things. That said, the MVC pattern is a pretty good way to achieve Separation of Concerns (SoC) (defined here). CI will allow you to follow the MVC pattern while, as the linked documentation page says, "...enabling you to work in a way that makes the most sense to you."
Models need not be restricted to database functions. (Though if that makes sense to you, then by all means, do it.) Many CI developers put all kinds of "business logic" in Models. Often this logic could just as easily reside in a custom library. I've often had cases where that "business logic" is so trivial it makes perfect sense to have it in a Controller. So, strictly speaking - there really isn't any strictly speaking.
In your case, and as one of the comments suggests, it might make sense to put the CSV functionality into a library (a.k.a. service). That makes it easy to use in multiple places - either Controller or Model.
Ultimately you want to keep any given block of code relevant to, and only to, the task at hand. Hopefully this can be done in a way that keeps the code DRY (Don't Repeat Yourself). It's up to you to determine how to achieve the desired end result.
You get to decide what the terms Model, View, and Controller mean.
As a general rule MVC is popular because it supports separation of concerns, which is a core tenet of SOLID programming. Speaking generically (different flavors support/recommend different implementations), your model holds your data (and often metadata for how to validate or parse it), your view renders your data, and your controller manages the flow of your data (this is also usually where security and validation occur).
In most systems, the Single Responsibility Principle would suggest that while business logic must occur at the controller level, it shouldn't actually occur in the controller class. Typically, business logic is done in a service, usually injected into the controller. The controller invokes the service with data from the model, gets a result that goes into the model (or a different model), and invokes the view to render it.
So in answer to your question, following "best practices" (and I'll put that in quotes because there's a lot of opinions out there and it's not a black and white proposition), your controller should not be processing and parsing data, and neither should your model; it should be invoking the service that processes and parses the data, then returning the results of aforementioned invocation.
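As a rough, language-agnostic sketch of that layering (written in TypeScript here purely for illustration, with hypothetical names): the controller only coordinates, the injected service does the parsing, and the model handles persistence.

```typescript
// Hypothetical service that owns the CSV parsing logic.
class CsvParserService {
  parse(contents: string): string[][] {
    return contents
      .trim()
      .split('\n')
      .map((line) => line.split(','));
  }
}

// Hypothetical model responsible only for persistence.
class TransactionModel {
  insertMany(rows: string[][]): void {
    // write `rows` to the transactions table; omitted in this sketch
  }
}

// The controller coordinates: it parses nothing itself and touches no SQL.
class TransactionController {
  constructor(
    private readonly parser: CsvParserService,
    private readonly transactions: TransactionModel,
  ) {}

  importCsv(contents: string): void {
    const rows = this.parser.parse(contents);
    this.transactions.insertMany(rows);
  }
}
```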
Now... is it necessary to do that in a service? No. You may find it more appropriate, given the size and complexity of your application (i.e. small and not requiring regular maintenance and updates) to take some shortcuts and put the business logic into the controller or the model; it's not like it won't work. If you are following or intend to follow the intent of the Separation of Concerns and SOLID principles, however (and it's a good idea on larger, more complex projects), it's best to refactor that out.
Going back to the old concept of decomposing the project logic into Models, Business Logic, and the Data Access Layer:
Models were the very thin layer that represented the objects.
The Business Logic layer was for validations, calling the methods, and processing the data.
The Data Access Layer was for connecting to the database and providing the O/R mapping.
In MVC, taking the ASP.NET tutorials as a reference:
Models: store all the object structure.
View: an engine for displaying the data sent from the controller (you can think of the view as the XSL file and the model as the XML it transforms).
Controller: the place where you call the methods and execute the processes.
Usually you can extend the models to support validation.
Finally, in my opinion and based on my experience, I code most of the sensitive processes that take some execution time on the SQL Server side for better performance; it is also easy to update the procedures if any rule changes or some adjustment is required, and all of this can be done without rebuilding your application.
I hope my answer gives you some hints and helps you.
If your CSV processing is used in more than one place, you can use a CI library to store the processing function. Or you can create a CSV model to store the processing function. That is up to you. I would initially code this in the controller, then if needed again elsewhere, that is when I would factor it out into a library.
Traditionally, models interact with the database, controllers deal with the incoming request, how to process it, what view to respond with. That leaves a layer of business logic (for instance your CSV processing) which I would put in a library, but many would put in its own model.
There is no hard rule about this. MVC, however it was initially proposed, is a loose term interpreted differently in different environments.
Personally, with CI, I use thin controllers, fat models that also contain business logic, and processing logic like CSV parsing I would put in a library, for ease of reuse between projects.
I've worked with OOP for a while now, and have gotten into the habit of creating classes for 'things', such as person, account, etc. I was coding in Java for this.
Recently I began to work with MVC (in PHP), and I've come to realise that, contrary to what I originally thought, the model is not like an OOP class - feel free to correct me if I'm wrong, but my understanding is that the model is simply an interface between the controller and the database (maybe with data processing - more on this below). My reason for thinking this stems from my recent messing around with the CodeIgniter framework for PHP. CI doesn't allow for instances of models. Rather, it's a singleton pattern, and in most tutorials I've seen, it's used only for database queries and sometimes some static methods.
Now, coming from OOP, I've gotten into the habit of having classes where the data is stored and able to be manipulated (an OOP class). My issue is that in MVC, I'm not sure where this takes place, if it does (I initially thought 'class' was synonymous with 'model'). So, I guess what I'm looking for is someone to explain to me:
Where does the manipulation of data (business logic) take place? I've read many articles and posts, and it seems some prefer to do it in the controller, and others in the model. Is one of these more correct than the other in terms of MVC?
Where/how do I store data for use within my application, such as that from a database or JSON/XML files returned from an API call? I'm talking things that would usually be attributes in an OOP class. Does this even still take place, or is it straight from the database to the view without being stored in variables in a class?
If you could link me to any guides/sites that may help me to understand this all better, I would appreciate it.
Representing MVC in terms of classes, you can say that:
Model - A class which stores/provides data.
View - A class which provides functionality to display the provided data in a particular fashion.
Controller - A class which controls how the data is manipulated and passed around.
The MVC design is adopted so that any of the modules (Model/View/Controller) can easily be replaced without affecting the others. In most cases it is the view that changes (e.g. the same data represented in the form of graphs or charts), but the other modules must also be made independent of each other.
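A bare-bones sketch of those three roles, in TypeScript with hypothetical names:

```typescript
// Model: stores/provides the data.
class UserModel {
  constructor(public name: string) {}
}

// View: knows how to display the data it is given.
class UserView {
  render(user: UserModel): string {
    return `<p>${user.name}</p>`;
  }
}

// Controller: decides how the data is manipulated and passed around.
class UserController {
  constructor(private model: UserModel, private view: UserView) {}

  show(): string {
    return this.view.render(this.model);
  }
}
```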
This is an interesting question, and you often hear contrary things about how clean and stripped down models should be vs them actually doing everything you'd think they should do. If I were you I'd take a look at data mappers. Tons of people much smarter than I have written on the subject, so please do some research, but I'll see if I can summarize.
The idea is that you separate your concept of a model into two things. The first is the standard model. This simply becomes a property bag. All it does is hold a state that often reflects your database data, or what your database data will be once saved, or however you use it.
The second is a data mapper. This is where you put the heavy-lifting stuff. Its job is to provide a layer between your pure model and the database, that's it. It talks to the database, retrieves data specific to your model, and maps it to the model. Or it reads the model and writes that data to the database. Or it compares the model's data with that of the DB.
What this does is it allows each layer to concern itself with only one thing. The model only has to keep a state, and be aware of its state. Or have some functionality related to the model that does not involve the database or storage. Now the model no longer cares about talking to the database which is hugely freeing! Now if you ever switch to a different database, or you switch to cookies or storing things in files or caching or any other form of persistence, you never need to change your models. All you have to change is the mapping layer that communicates between your models and the DB.
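A compact sketch of that split (TypeScript, hypothetical names, and a placeholder database interface standing in for whatever persistence you use): the model is just a property bag, and the mapper is the only thing that knows about storage.

```typescript
// Pure model: just state, no knowledge of the database.
class User {
  constructor(public id: number, public email: string) {}
}

// Minimal interface standing in for whatever persistence layer you use.
interface Database {
  query(sql: string, params: unknown[]): Promise<any[]>;
  execute(sql: string, params: unknown[]): Promise<void>;
}

// Data mapper: the only layer that talks to storage.
class UserMapper {
  constructor(private db: Database) {}

  async findById(id: number): Promise<User | null> {
    const rows = await this.db.query('SELECT id, email FROM users WHERE id = ?', [id]);
    return rows.length ? new User(rows[0].id, rows[0].email) : null;
  }

  async save(user: User): Promise<void> {
    await this.db.execute('UPDATE users SET email = ? WHERE id = ?', [user.email, user.id]);
  }
}
```

Swapping the database for files, cookies, or a cache then means replacing the mapper, while the model stays untouched.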
Suppose you have an N-Tier, MVC architecture of your (java or *) application. You chose MVC for its logical, manageable partitioning of the application code.
Suppose you are using Java. Since there are many frameworks and tools out there, you don't want to tie a layer (M, V, or C) to a specific technology, and you would like to maintain the freedom for later upgrades or even integration with other parts.
Now, in every scenario, a Model may contain primitive data types (like int, String, etc.), or more advanced ones in the form of classes (OO features for maximum complexity).
How would you deal with that:
Keep your Model as close to a POJO as you can, and implement the typing at the V or C layer, since it is mostly about data input and validation. This way you don't have to worry about your Model being used elsewhere, and almost nothing changes at the DAL either, but you have to put in extra effort to convert your model to a rich model (rich in types) when you transfer it from storage to the view.
Enable your Model to support rich types in the form of other nested Models, and implement V- or C-layer tricks to deal with them; each layer will have to process the model complex instead of just using getters and setters. However, this way you carry the model complex into other technological settings, which have to be adapted accordingly.
[none of the above or else]
I think the technology is irrelevant if you want to go fully distributed. I would create separate apps for each layer, which you can easily cluster, and then choose a standardized communication format like JSON over HTTP(S).
JSON can be as simple or complex as you like and pretty much every language out there has support for it built in.
HTTP is fast and simple, and very easy to scale.
Just my two cents.