UML diagram (Class Diagram) of whole Spring project in STS - spring

I have developed a REST API project and I want to draw a class diagram for each API individually. I have downloaded the Eclipse plugin for PlantUML, but it doesn't generate a complete class diagram for an API; it only generates one per class. Is there any tool that can help me generate a class diagram for each API? I need a class diagram containing all the classes involved in an API, with proper relationships such as association, inheritance, etc.
For example, I have created one myself, but I have no way to verify that it is correct.
In this class diagram, a request for client information (a list of items) comes to ClientProfileController, which calls a ClientProfileService method; the service returns a List of ClientProfileVO objects after performing the corresponding DB operations. ClientProfileController uses the ResponseUtility class for common operations, and finally the List of ClientProfileVO objects is added to a BaseResponse using its setData(List) method. The BaseResponse is returned to ClientProfileController, which sends it as the final response to the client.
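For reference, the flow described above could be sketched roughly as follows (a simplified sketch: only the class and method names come from the diagram; the request mapping and the ResponseUtility method name are assumptions):

import java.util.List;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/clients")
public class ClientProfileController {

    private final ClientProfileService clientProfileService;
    private final ResponseUtility responseUtility;

    public ClientProfileController(ClientProfileService clientProfileService,
                                   ResponseUtility responseUtility) {
        this.clientProfileService = clientProfileService;
        this.responseUtility = responseUtility;
    }

    @GetMapping
    public BaseResponse getClientProfiles() {
        // ClientProfileService performs the DB operations and returns the value objects
        List<ClientProfileVO> profiles = clientProfileService.getClientProfiles();

        // ResponseUtility performs the common response-building steps
        BaseResponse response = responseUtility.buildSuccessResponse();
        response.setData(profiles);
        return response;
    }
}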

One way to solve your problem is to use an open-source project based on JDT named Spoon, now available on GitHub.
It offers a simple, intuitive API for programming your own Java analyses, and more.
As a proof of concept, during my bachelor's degree we were introduced to this API for many (semi-)automated refactorings (mainly for migrating monoliths to microservices) and for automated control-flow-graph analysis.
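For example, a first stab at dumping class relationships as PlantUML with Spoon might look roughly like this (a sketch: the source path is an assumption, and the API calls should be checked against the Spoon version you use):

import spoon.Launcher;
import spoon.reflect.CtModel;
import spoon.reflect.declaration.CtField;
import spoon.reflect.declaration.CtType;

public class UmlSketch {
    public static void main(String[] args) {
        Launcher launcher = new Launcher();
        launcher.addInputResource("src/main/java"); // path to your project's sources (assumption)
        launcher.buildModel();
        CtModel model = launcher.getModel();

        System.out.println("@startuml");
        for (CtType<?> type : model.getAllTypes()) {
            System.out.println("class " + type.getSimpleName());
            // Inheritance relationships
            if (type.getSuperclass() != null) {
                System.out.println(type.getSuperclass().getSimpleName() + " <|-- " + type.getSimpleName());
            }
            // Associations via field types
            for (CtField<?> field : type.getFields()) {
                System.out.println(type.getSimpleName() + " --> " + field.getType().getSimpleName());
            }
        }
        System.out.println("@enduml");
    }
}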

Related

Two approaches to implementing REST API on Spring

I am building a REST API with Spring. I took a course on Spring Data and Hibernate and realized that I had been building the REST API in the most time-consuming way.
When I added a new entity to the domain, I went through the following chain of objects:
Entity - domain object
DTO - for transmitting/receiving an object to/from a client
Mapper - to convert between Entity and DTO
Repository - for interacting with the database
RestController - for processing API requests
Service - service class for the object
The approximate chain of my actions was as follows:
RestController processes the request: it receives a DTO from the client (in the case of creating a new object)
Mapper in the controller converts the DTO to an Entity
Service is called
Service accesses the Repository
Repository returns the result of execution (the created Entity)
Service returns the created Entity to the RestController
RestController returns a ResponseEntity to the client, in which I put the body and the response code.
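In code, one endpoint of that chain looks roughly like this (a simplified sketch; the Book entity, DTO, mapper, and service are placeholders standing in for any domain object):

import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/api/books")
public class BookController {

    private final BookService bookService;
    private final BookMapper bookMapper;

    public BookController(BookService bookService, BookMapper bookMapper) {
        this.bookService = bookService;
        this.bookMapper = bookMapper;
    }

    @PostMapping
    public ResponseEntity<BookDto> create(@RequestBody BookDto dto) {
        // DTO -> Entity via the Mapper, Service -> Repository, then the created Entity back as a DTO
        Book created = bookService.create(bookMapper.toEntity(dto));
        return new ResponseEntity<>(bookMapper.toDto(created), HttpStatus.CREATED);
    }
}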
As you can see, that is a long chain of actions and a large number of objects.
But then I found out that if you use Spring Data REST, none of this is needed: the API is supplied by Spring out of the box. In general, you only need to create an Entity and a Repository.
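For comparison, a minimal Spring Data REST setup can be as small as this (a sketch: the entity and path are made up, the two types would normally live in separate files, and the imports assume an older javax.persistence setup; newer Spring Boot versions use jakarta.persistence):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.rest.core.annotation.RepositoryRestResource;

@Entity
public class Book {
    @Id
    @GeneratedValue
    private Long id;
    private String title;
    // getters and setters omitted
}

@RepositoryRestResource(path = "books")
public interface BookRepository extends JpaRepository<Book, Long> {
    // GET/POST/PUT/DELETE endpoints under /books are generated by Spring Data REST
}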
It turns out that for typical CRUD-type operations, I wrote a lot of controllers and their methods in vain.
Questions:
When should I use a RestController, and when should I use Spring Data REST?
Is it possible to combine the two approaches for one Entity? It seems I was wasting my time writing controllers for simple operations like creating, getting, saving, and deleting; these could be moved to Spring Data REST.
Will I be able to do in Spring Data REST some of the things that I did in my RestController? Such as:
Returning an entity property as an id instead of an object? I mean, I have entity properties that are themselves entities, and for these fields I sometimes need to return just their ID instead of the whole entity.
Is there any way to control error handling? In my RestController approach I have a class extending ResponseEntityExceptionHandler, so all errors, wherever they occur in my RestControllers, are handled in one place in the same way, and I always know that every error returns roughly the same response structure (a minimal sketch of this handler is shown below).
Data validation: it used to be done on the DTOs received from the client. Are there any nuances waiting for me in this regard?
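For context, the error handling I mentioned looks roughly like this (simplified; the exception type and the error-body structure are placeholders):

import java.time.Instant;
import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityNotFoundException;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.RestControllerAdvice;
import org.springframework.web.servlet.mvc.method.annotation.ResponseEntityExceptionHandler;

@RestControllerAdvice
public class ApiExceptionHandler extends ResponseEntityExceptionHandler {

    // Every error ends up here and is turned into the same response structure
    @ExceptionHandler(EntityNotFoundException.class)
    public ResponseEntity<Object> handleNotFound(EntityNotFoundException ex) {
        Map<String, Object> body = new HashMap<>();
        body.put("timestamp", Instant.now());
        body.put("message", ex.getMessage());
        return new ResponseEntity<>(body, HttpStatus.NOT_FOUND);
    }
}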
I'm a little stuck on how to move forward. Please give me your recommendations and thoughts on this, and a push in the right direction on what to use and how.
What Spring Data REST can do for you is scaffold a plain repository into a REST service. It is much faster, and in theory it should be flexible, but in practice it is hard to achieve much more than REST access to your repositories.
In production I've used Spring Data REST as a wrapper around the database: in a service/microservice architecture you sometimes wrap the core DB in such a layer in order to make the application DB-agnostic. The services then apply the business logic on top of this wrapper and provide the API for the front end.
On the other hand, Spring Data REST (SDR) is not suitable if you plan to use only the generated endpoints, because you need to customize the logic for fetching and manipulating data in the Repositories/Services. You can combine both: use SDR for the "simple" entities where you only need basic CRUD over them, and go with the standard approach for the complex entities, where you decouple the entity from the endpoint and apply your custom business logic in the services. The downside of mixing the two strategies is that your app will not be consistent, and some things will happen out of the box, which is very confusing for a new developer on the project.
It looks like wasted time and effort to write these classes yourself, but that is only because your app doesn't have a complex database and/or business logic yet.
In short, the "standard" way provides much greater flexibility at the price of writing repetitive code in the beginning.
You have much more control when building the full stack on your own: you use DTOs instead of returning the entity objects, you can combine repositories in your services, and you can put your business logic in the service layer. If you are not doing any of the above (and you don't expect to in the near future), there is no need to write all that boilerplate over and over again, and that's when Spring Data REST comes into play.
This is an interesting question.
Spring Data REST provides abstraction and takes most of the implementation into its own hands. This is helpful for small applications where the business logic resides at the repository layer. I would choose it for applications with simple, straightforward business logic.
However, if I need fine-grained control (e.g. transactions, AOP, unit testing, complex business decisions) at each of the layers you mentioned, which is most often needed for large-scale applications, I would prefer writing each of these layers myself.
There is no rule of thumb.

Implement a custom Spring Data Repository for a non-supported database

I want to implement a Spring Data repository for a database which is not currently supported (hypothetical question, no need to ask about the database).
How is this possible, and where can I find an example of it?
Short answer is "yes, definitely". One of the main Spring-data's intentions is to unify access to different data storage technologies under same API style. So you can implement spring-data adapter for any database as long as it is worth implementing a connector to that database in Java (which is definitely possible for the majority of databases).
Long answer would take several blog posts or even a small book :-) But let me just highlight couple of moments. Each of the existing spring-data modules expose one of (or both) the API flavors:
Imperative - in the form of various template classes (e.g. RedisTemplate). This is mostly for databases that don't have a query language, only a programmatic API. You just wrap your db's API in a template class and you're done (see the sketch after this list).
Declarative - in the form of so-called declarative repositories, a quite sophisticated mechanism that matches annotations on method signatures, or the method signatures themselves, to the db's native queries. Luckily, the spring-data-commons module provides a lot of scaffolding and common infrastructure code for this, so you just need to fill in the gaps for your specific data storage mechanism. You can look at the slide deck from my conference talk, where I explained at a high level how a particular Spring Data module generates real implementations of repositories based on user declarations. Or you can just go into any of the existing modules and look at the source code. The most interesting parts there are usually the RepositoryFactory and QueryLookupStrategy implementations.
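As an illustration of the imperative flavor, a template class for a hypothetical database might start out like this (a rough sketch; MyDbClient stands in for whatever native Java driver the database ships with):

// A minimal, hand-rolled "template" over a hypothetical native client,
// in the spirit of RedisTemplate or MongoTemplate.
public class MyDbTemplate {

    private final MyDbClient client; // the database's native Java driver (hypothetical)

    public MyDbTemplate(MyDbClient client) {
        this.client = client;
    }

    public void put(String key, Object value) {
        // Translate the call into the native API; connection and exception handling would go here
        client.store(key, value);
    }

    public <T> T get(String key, Class<T> type) {
        return type.cast(client.fetch(key));
    }
}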
That is an extremely simplified view of the Spring Data concepts. To get more detailed information and explanations of the core principles, I'd suggest reading the spring-data-commons reference documentation and having a look at the spring-data-keyvalue project, which is a good starting point for implementing a Spring Data module for key-value stores.

MVC / Repository Pattern - Architecture

I have a project in which I am using NHibernate and ASP.Net MVC. The application is intended to allow users to track certain data and then produce views of statistics based upon the data entered. The structure of my application thus far looks something like this:
NHibernate Layer: Contains Repository<T> and UnitOfWork classes, as well as entity mapping definitions.
Core/Service Layer: Contains generic EntityService class. At the moment, this simply defines transaction scope via IUnitOfWork and interfaces with IRepository to provide higher-level data access services.
Presentation Layer (MVC Application): Not yet implemented, but contains the usual stuff plus dependency injection.
I have a couple of questions:
Is it poor design to allow my MVC application to handle dependency injection for ALL layers? For example, as well as dependency injection of EntityService instances into controllers, it will handle the dependency injection of IRepository into the EntityService classes. Should the service layer handle this itself, even though this would mean performing dependency injection in two distinct places?
Where should I produce my statistics? This business logic doesn't seem to belong in my service layer, which, at present, only contains entity type definitions and an interface for modifying and accessing entity properties. I have a few thoughts on this, but I'm not sure which I like best:
Keep my service layer as is and create a separate Statistics project - this is completely independent of the entity types for which it will be used, meaning my MVC controllers will have to pass raw numerical information between my business entities and my (presumably static) statistics classes. This is quite a neat separation but potentially means a lot of business logic still remaining in the presentation layer.
Create a Statistics project; however, create a tight coupling between the classes in this project and my business entities. For example, instead of passing a Reading object's values into a method, I will pass the entire object (or define them as extension methods). This will shift business logic out of my MVC app but the tight coupling seems a bit messy.
Keep all of my business logic inside my service layer. Define strongly-typed subclasses of EntityService, so my services contain both entity-specific business methods and data storage methods, while keeping the entity classes themselves as pure data containers. Create a separate Statistics project for any generic statistical processing and call its methods via my derived service classes. My service classes effectively merge business functions with the storage functionality provided by IRepository<T>.
I am erring toward the third option but does anyone have any thoughts? Alternative suggestions?
Thanks in advance!
Preliminary observation:
I like the way you described your project; I just didn't get why your Data Access Layer (DAL) is called the NHibernate Layer: it is at odds with the rest, where you (correctly) didn't use a technology name to describe a logical layer. So I suggest you rename it DAL and use it to abstract your app from NHibernate.
My opinions about your questions:
Absolutely not. It is good to apply Dependency Injection to all layers. A couple of reasons why it is good:
1.1 Testing: you can mock the DAL interfaces and unit test the Service Layer without the DAL, using another DI config file. In the same way you can mock the Service layer for the Web Controllers layer, and so on.
1.2 Different DAL implementations: suppose you need different DAL implementations (NoSQL, SQL, or LINQ instead of NHibernate, etc.) for different deployments of your project, or to scale in the future. You can do that easily by maintaining different DI config files.
You can have the same layer deployed in different projects. In the same way, you can have a project containing different layers. I think their relation is orthogonal: a project describes a physical (development-time and run-time) implementation, while layers are logical. So initially I would keep it simple and go with the third option.
I just don't understand why you say the following regarding this option:
Create a separate Statistics project for any generic statistical processing and call its methods via my derived service classes. My service classes effectively merge business functions with the storage functionality provided by IRepository.
I see Statistics as one or more services, so you can implement it as a namespace with classes inside your Service Layer. As with any other service, you can inject the DAL Repository classes into it. And, as with any other Service/DAL, the Model classes can be shared between the different Services and DAL classes.
StatsService.AverageReadingFor(Person p, DateTime start, DateTime end) sounds good.
There are several implementation options:
Using underlying repository features (for example: SQL avg function)
Using Observer Pattern which is implementable also using Dependency Injection
Using Aspect Oriented Programming. See that Spring.Net chapter as an example.
If you have more than one Service Layer instance (more than one server), then options 2 and 3 must be adapted for out-of-process communication using a messaging system.
Just an update - Regarding my second question, I have decided to define an IStatsService<T> which expects an IEntityService<T> to be passed into its constructor. I'll use this for generic statistical processing of business entities and create further interfaces that implement IStatsService<T> where I need more type-specific information.
Hopefully this will help someone who has been scratching their head about a similar problem!

Sharing a database between two ASP.NET MVC 3 applications on Azure

(I had a hard time titling the question so feel free to suggest edits)
Here's the situation: we have just started building a system composed of two integrated MVC 3 web applications running on Azure with a shared Azure SQL database. There are many reasons for running two apps instead of one, and I'd rather not get into that...
Originally, the database was created code-first from MVC application "A". 75% of all the entities created will be relevant to application "B", plus application "B" will need a few entities specific to it.
Currently, the entity-defining classes have been extracted into a class library within the application "A" solution to allow for reuse in application "B". But I am still unsure how to go about adding the entities required for application "B"...
The question is: what is the best way to manage database development in this situation? Specifically, where should the definition of the entities live? Should we just have a separate DB project defining the database and work DB-first? (This is my preferred option at this stage.)
Since both of the devs (me and the other dev) working on this are new to MVC and EF, any advice would be much appreciated.
Without seeing what you have, it's not entirely coming together in my head, but I think I may have an idea on this.
Can you create an additional project containing your models (data access layer) that has your Entity Framework EDMX (or code-first classes) and POCO templates installed? This project will be shared by both applications, i.e. both projects reference this assembly and both have the EF connection string in their web.configs.
Another approach is to put all the code-first classes into a single project (whatever.domain, whatever.models, etc.). Your mapping code then goes into your DataAccess project:
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    // Drop a default convention you don't want, e.g. pluralised table names
    modelBuilder.Conventions.Remove<PluralizingTableNameConvention>();
    // Register each entity's mapping configuration
    modelBuilder.Configurations.Add(new CustomerMap());
    ...
}
You now have shared POCO classes and a single data access layer.
Some treat their POCO classes as their domain objects (there's no problem with this) and put their business logic in the POCO classes. This is fine as long as your POCO objects themselves remain persistence-ignorant; ideally you don't want to reference implementation-specific components in your POCO classes. For a good write-up on this, see:
POCO - if POCO means pure .net class with only properties, where i can write validations in MVC
Personally, I like going DB-first and then reverse-engineering the database with the EF Power Tools to get a code-first model; that way, if you ever want to integration-test it, you can simply create the database for your integration tests and remove it when done.

How do I Integrate Entity Framework with External REST Data Source?

I am creating my first ASP.NET MVC 3 application, and my data comes from a data source I can access only via its REST API.
I will only be using READ-ONLY access at this point to the REST data source (no updating, etc.)
I would like to use the Entity Framework V4 to provide a Business Entity interface to MVC 3 without exposing it to the REST API.
I need to get something working quickly, so I don't have time to fully understand the Service Layer / UnitOfWork and Repository patterns just yet, although I plan to go there next.
I am willing to use a Repository class at this time, but not ready for DI / IoC container yet.
Any suggestions on where the REST API calls should go?
EDIT
I learned by asking this question that it is not necessarily useful to integrate an ORM with a REST API; see my accepted answer below.
An Object/Relational Mapper, or ORM, like Entity Framework has specifically been developed to abstract away a relational database. It might not be the right fit for REST calls.
You could instead build a repository class that encapsulates the REST calls and exposes methods like IEnumerable<T> GetAll() or T GetById(...).
