Is there a way to do multi-tenant Swagger? - spring-boot

I'm working on a Spring Boot application that needs to be migrated to being multi-tenant. One of the biggest changes is the concept of core vs extension models. The core model contains a data structure common to all tenants, then they can each extend the core model with additional fields that meet their specific business needs.
Pretty much everything is good to go with this, but I'm getting tripped up by Swagger. It's a corporate mandate that all our services have good Swaggers, and to be blunt I'm just having a very hard time figuring out a solution.
First, in order to handle the dynamic nature of the payload structure, we're not using Spring's automatic serialization of POJOs to JSON. We're doing it manually, in order to handle separating/combining the core and extension models. This means our controller methods pretty much look like this:
public String method(HttpServletRequest request) {
// Do stuff
}
Right away, that cripples a lot of how SpringFox Swagger auto-magically identifies the API structure.
But then there's the fact that the request/response bodies for these methods vary. For different tenants, the structure will be slightly different. The APIs themselves are unchanged, the core model is always the same, but different sets of extension data are in play.
Are there any SpringFox/Swagger features that may help with this?

Could you use a combination of POJOs with an embedded Map<String, Object> for tenant-specific extensions? You could then document the core model fields.
e.g.
@ApiModel(value = "MyPoJo", description = "My POJO")
@Builder
@JsonIgnoreProperties(ignoreUnknown = true)
@Value
public class MyPoJo {
    @ApiModelProperty(value = "My string field", required = true)
    String string;
    @ApiModelProperty(value = "My integer field")
    int integer;
    @ApiModelProperty(value = "Tenant specific custom attributes")
    Map<String, Object> customAttributes;
}
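Building on that idea, Jackson's @JsonAnyGetter/@JsonAnySetter can flatten the extension map into the payload, so tenant fields appear at the top level of the JSON rather than nested under customAttributes. A minimal sketch, assuming Jackson is doing the (de)serialization; the class and field names here are illustrative, not from the question:
import java.util.HashMap;
import java.util.Map;
import com.fasterxml.jackson.annotation.JsonAnyGetter;
import com.fasterxml.jackson.annotation.JsonAnySetter;

public class ExtensiblePayload {
    // Core field, common to all tenants (illustrative)
    private String coreField;

    // Tenant-specific extension fields, kept out of the static schema
    private final Map<String, Object> extensions = new HashMap<>();

    @JsonAnyGetter // serializes each extension entry as a top-level JSON field
    public Map<String, Object> getExtensions() {
        return extensions;
    }

    @JsonAnySetter // collects unknown incoming JSON fields into the extension map
    public void setExtension(String name, Object value) {
        extensions.put(name, value);
    }

    public String getCoreField() {
        return coreField;
    }

    public void setCoreField(String coreField) {
        this.coreField = coreField;
    }
}
The Swagger model then documents the core fields, while the extension map stays open-ended per tenant.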

Related

SpringBoot build query dynamically

I'm using SpringBoot 2.3.1 and Spring Data for accessing to PostgreSQL. I have the following simple controller:
@RestController
public class OrgsApiImpl implements OrgsApi {
    @Autowired
    Orgs repository;

    @Override
    public ResponseEntity<List<OrgEntity>> listOrgs(@Valid Optional<Integer> pageLimit,
            @Valid Optional<String> pageCursor, @Valid Optional<List<String>> domainId,
            @Valid Optional<List<String>> userId) {
        List<OrgEntity> orgs;
        if (domainId.isPresent() && userId.isPresent()) {
            orgs = repository.findAllByDomainIdInAndUserIdIn(domainId.get(), userId.get());
        } else if (domainId.isPresent()) {
            orgs = repository.findAllByDomainIdIn(domainId.get());
        } else if (userId.isPresent()) {
            orgs = repository.findAllByUserIdIn(userId.get());
        } else {
            orgs = repository.findAll();
        }
        return ResponseEntity.ok(orgs);
    }
}
And a simple JPA repository:
public interface Orgs extends JpaRepository<OrgEntity, String> {
    List<OrgEntity> findAllByDomainIdIn(List<String> domainIds);
    List<OrgEntity> findAllByUserIdIn(List<String> userIds);
    List<OrgEntity> findAllByDomainIdInAndUserIdIn(List<String> domainIds, List<String> userIds);
}
The code above has several obvious issues:
If the number of query parameters grows, this if/else chain grows very quickly and becomes too hard to maintain. Question: Is there any way to build a query with a dynamic number of parameters?
This code doesn't contain a mechanism to support a cursor. Question: Is there any tool in Spring Data to support cursor-based queries?
The second question can easily be dealt with once the first one is answered.
Thank you in advance!
tl;dr
It's all in the reference documentation.
Details
Spring Data modules pretty broadly support Querydsl to build dynamic queries as documented in the reference documentation. For Spring Data JPA in particular, there's also support for Specifications on top of the JPA Criteria API. For simple permutations, query by example might be an option, too.
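For illustration, a rough sketch of the Specifications approach applied to the question's entity, assuming OrgEntity has domainId and userId properties and that the repository also extends JpaSpecificationExecutor<OrgEntity>; this adapts names from the question and is not a definitive implementation:
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import javax.persistence.criteria.Predicate;
import org.springframework.data.jpa.domain.Specification;

public final class OrgSpecifications {

    // One Specification built from whichever filters are present, so adding a
    // new query parameter means adding one line instead of a new if-branch.
    public static Specification<OrgEntity> matching(Optional<List<String>> domainIds,
                                                    Optional<List<String>> userIds) {
        return (root, query, cb) -> {
            List<Predicate> predicates = new ArrayList<>();
            domainIds.ifPresent(ids -> predicates.add(root.get("domainId").in(ids)));
            userIds.ifPresent(ids -> predicates.add(root.get("userId").in(ids)));
            return cb.and(predicates.toArray(new Predicate[0]));
        };
    }
}
The controller then collapses to a single call: repository.findAll(OrgSpecifications.matching(domainId, userId)).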
As for the second question, Spring Data repositories support streaming over results. That said, assuming you'd like to do this for performance reasons, JPA might not be the best fit in the first place, as it will still keep processed items around due to its entity lifecycle model. If it's just about accessing subsets of the results page by page or slice by slice, that's supported, too.
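A small sketch of the slice-by-slice option against the question's repository, assuming the same derived-query naming; Slice avoids the extra count query that Page issues:
import java.util.List;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Pageable;
import org.springframework.data.domain.Slice;
import org.springframework.data.jpa.repository.JpaRepository;

public interface Orgs extends JpaRepository<OrgEntity, String> {

    // Same derived query as in the question, but returning one slice at a time;
    // Spring Data applies LIMIT/OFFSET from the Pageable.
    Slice<OrgEntity> findAllByDomainIdIn(List<String> domainIds, Pageable pageable);
}

// Walking the result slice by slice:
// Slice<OrgEntity> slice = repository.findAllByDomainIdIn(ids, PageRequest.of(0, 50));
// while (slice.hasNext()) {
//     slice = repository.findAllByDomainIdIn(ids, slice.nextPageable());
// }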
For even more efficient streaming over large data sets, it's advisable to resort to plain SQL either via jOOQ (which can be used with any Spring Data module supporting relational databases), Spring Data JDBC or even Spring Data R2DBC if reactive programming is an option.
You can use the spring-dynamic-jpa library to write a query template.
The query template is built into different query strings before execution, depending on the parameters you pass when you invoke the method.

Should marshal/demarshaling be in service (transactional) methods or controller in spring boot project?

I am working on a Spring Boot application with a CRUD API that takes and returns JSON objects. Is it okay to include JSON->POJO and POJO->JSON logic in a service method? (The service method is marked with the @Transactional annotation.)
// Controller
public Map<String, String> getPersonNames() {
    return personSvc.getNames();
}

// Service method
@Transactional(readOnly = true)
public Map<String, String> getNames() {
    return populateNames(repo.findAll());
}

private Map<String, String> populateNames(final List<Person> personList) {
    return ImmutableMap.of(
        // Populate names into map
    );
}
Well, it mostly depends on the application you are building.
Based on the information you provided (almost none) I can only speak in general terms, but there is Domain-Driven Design (DDD), which is quite common for Spring applications. You can find more info in answers to this question.
This kind of design separates your core domain logic from the logic your technology stack forces you to have. Briefly, it keeps the domain models (the objects you work with) in the depths of your application.
Next, it wraps the core with an application layer (where the logic that relies on domain models lives). The application layer only knows how to process the underlying models.
And the last wrapper is the (port) adapter layer. It adapts your logic to a specific technology. It can be, for example, an external API or a wrapper for MongoDB (while the application layer declares only an interface for collecting documents, this layer adapts (implements) it for the concrete technology). It can also provide marshalling/demarshalling.
Maybe example can explain it better:
The domain model is a document (an article) that your service works with.
The application layer knows how to process them (collect, order, filter articles), but knows nothing about JSON serialization.
The resource (aka port adapter) knows how to serialize a collection of articles into JSON and back, but that's the only thing it does. No logic here.
And you may have noticed how every layer knows only about its underlying layers. An article does not know anything; it's just a model. The application knows how to process articles. And the adapter knows how to adapt the processing results for a concrete technology, JSON for instance.
So I would suggest performing basic validation (not against domain/application-layer logic) and the marshalling/demarshalling process at the highest level, at the resource (your @RestController's endpoints, for instance), since JSON is just a way to adapt your domain for external connections.
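A minimal sketch of that split, with hypothetical names (PersonService, and Person's getId()/getName() accessors are assumptions): the controller owns the JSON shape, while the transactional service only returns domain objects:
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Resource/adapter layer: owns the wire format
@RestController
public class PersonController {

    private final PersonService personSvc;

    public PersonController(PersonService personSvc) {
        this.personSvc = personSvc;
    }

    @GetMapping("/persons/names")
    public Map<String, String> getPersonNames() {
        // Mapping to the JSON shape happens here, outside the transaction
        return personSvc.getNames().stream()
                .collect(Collectors.toMap(Person::getId, Person::getName));
    }
}

// Application layer: transactional, deals only in domain objects
@Service
class PersonService {

    @Transactional(readOnly = true)
    public List<Person> getNames() {
        // load and return domain objects; no JSON concerns here
        return List.of();
    }
}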

Neo4j Spring data POC for social RESTful layer

Starting to work on a new project... a RESTful layer providing services for a social network platform.
Neo4j was my obvious choice for the main data store. I had the chance to work with Neo4j before, but without exploiting Spring Data's ability to map POJOs to nodes, which seems very convenient.
Goals:
The layer should provide support resembling the Facebook Graph API, which defines for each entity/object the related properties & connections that can be referenced from the URL. FB Graph API
If possible I want to avoid transfer objects that get serialized to/from domain entities, and instead use my domain POJOs as the JSON transferred to/from the client.
Examples:
HTTP GET /profile/{id}/?fields=...&connections=... The response will be a Profile object containing what was requested in the URL.
HTTP GET /profile/{id}/stories/?fields=..&connections=...&page=..&sort=... The response will be a list of Story objects according to what was requested.
Relevant Versions:
Spring Framework 3.1.2
Spring Data Neo4j 2.1.0.RC3
Spring Data Mongodb 1.1.0.RC1
AspectJ 1.6.12
Jackson 1.8.5
To keep it simple, we have Profile and Story nodes and a Role relationship between them.
public abstract class GraphEntity {
    @GraphId
    protected Long id;
}
Profile Node
@NodeEntity
@Configurable
public class Profile extends GraphEntity {
    // Profile fields
    private String firstName;
    private String lastName;

    // Profile connections
    @RelatedTo(type = "FOLLOW", direction = Direction.OUTGOING)
    private Set<Profile> followThem;

    @RelatedTo(type = "BOOKMARK", direction = Direction.OUTGOING)
    private Set<Story> bookmarks;

    @Query("START profile=node({self}) MATCH profile-[r:ROLE]->story WHERE r.role = 'FOUNDER' AND story.status = 'PUBLIC' RETURN story")
    private Iterable<Story> published;
}
Story Node
@NodeEntity
@Configurable
public class Story extends GraphEntity {
    // Story fields
    private String title;
    private StoryStatusEnum status = StoryStatusEnum.PRIVATE;

    // Story connections
    @RelatedToVia(type = "ROLE", elementClass = Role.class, direction = Direction.INCOMING)
    private Set<Role> roles;
}
Role Relationship
#RelationshipEntity(type = "ROLE")
public class Role extends GraphEntity {
#StartNode
private Profile profile;
#EndNode
private Story story;
private StoryRoleEnum role;
}
At first I didn't use AspectJ support, but I found it very useful for my use-case because it generates a separation between the POJO and the actual node. Therefore I can easily request properties/connections according to the requests, and the Domain-Driven Design approach seems very nice.
Question 1 - AspectJ:
Let's say I want to define default fields for an object; these fields will be returned to the client whether requested in the URL or not... so I have tried the @Fetch annotation on these fields, but it seems it is not working when using AspectJ.
At the moment I do it this way...
public Profile(Node n) {
    setPersistentState(n);
    this.id = getId();
    this.firstName = getFirstName();
    this.lastName = getLastName();
}
Is this the right approach to achieve that? Should the @Fetch annotation be supported even when using AspectJ? I would be happy to get examples/blogs about AspectJ + Neo4j; I've found almost nothing...
Question 2 - Pagination:
I would like to support pagination when requesting a specific connection, for example
/profile/{id}/stories/, if stories are related as below:
// inside profile node
@RelatedTo(type = "BOOKMARK", direction = Direction.OUTGOING)
private Set<Story> bookmarks;
or /profile/{id}/stories/, if stories are related as below:
// inside profile node
@Query("START profile=node({self}) MATCH profile-[r:ROLE]->story WHERE r.role = 'FOUNDER' AND story.status = 'PUBLIC' RETURN story")
private Iterable<Story> published;
Is pagination supported out of the box with either @Query, @RelatedTo, or @RelatedToVia, using the Pageable interface to retrieve a Page instead of a Set/List/Iterable? The limit and the sorting should be dynamic, depending on the request from the client... I can achieve this using the Cypher Query DSL but would prefer to use the basics. Other approaches will be happily accepted.
Question 3 - #Query with {self}:
A kind of silly question, but I can't help it :). It seems that when using @Query inside the node entity (using the {self} parameter), the return type must be Iterable, which makes sense...
Let's take the example of...
// inside profile node
@Query("START profile=node({self}) MATCH profile-[r:ROLE]->story WHERE r.role = 'FOUNDER' AND story.status = 'PUBLIC' RETURN story")
private Iterable<Story> published;
When the published connection is requested:
// retrieving the context profile
Profile profile = profileRepo.findOne(id);
// getting the published stories using AspectJ - will redirect to the backed node
Iterable<Story> published = profile.getPublished();
// set the result into the domain object - will throw a read-only exception because the type is Iterable
profile.setPublished(published);
Is there a workaround for this that doesn't involve creating another property marked @Transient inside Profile?
Question 4 - Recursive relations:
I am having some problems with transitive/recursive relations: when assigning a new Profile Role in a Story, the relationship entity role contains the @EndNode story, which contains a roles connection... and one of them is the context role above, and it never ends :)...
Is there a way to configure the Spring Data engine not to create these never-ending relations?
Question 5 - Transactions:
Maybe I should have mentioned it before, but I am using the REST server for the Neo4j DB. From previous reading I understand that there is no out-of-the-box support for transactions, unlike when using the embedded server?
I have the following code...
Profile newProfile = new Profile();
newProfile.getFollowThem().add(otherProfile);
newProfile.getBookmarks().add(otherStory);
newProfile.persist(); // or profileRepo.save(newProfile)
Will this run in a transaction when using the REST server? There are a few operations here; if one fails, do all fail?
Question 6 - Mongo + Neo4j:
I need to store data which doesn't have a relational nature, like feeds, comments, and messages. I thought about an integration with MongoDB to store these. Can I split domain POJO fields/connections across both Mongo and Neo4j with cross-store support? Will it support AspectJ?
That's it for now... any comments regarding any approach I presented above are welcome. Thank you.
Starting to answer, by no means complete:
Perhaps upgrade to the .RELEASE versions?
Question 1
If you want to serialize AspectJ entities to JSON you have to exclude the internal fields generated by the advanced mapping (see this forum discussion).
When you use the advanced mapping, @Fetch is not necessary as the data is read through from the database anyway.
Question 2
For pagination of fields, you can try to use a Cypher query with @Query and LIMIT 100 SKIP 10 as fixed parameters. Otherwise you could employ a repository/template to actually fill a collection in a field of your entity with the paged information.
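As an illustration of the repository route, a hedged sketch assuming SDN 2.x's GraphRepository and its support for Pageable parameters on annotated queries; the repository and method names are made up for the example:
import org.springframework.data.domain.Page;
import org.springframework.data.domain.Pageable;
import org.springframework.data.neo4j.annotation.Query;
import org.springframework.data.neo4j.repository.GraphRepository;

public interface StoryRepository extends GraphRepository<Story> {

    // Paged variant of the 'published' query; {0} binds the profile argument
    // and SDN derives SKIP/LIMIT from the Pageable.
    @Query("START profile=node({0}) MATCH profile-[r:ROLE]->story " +
           "WHERE r.role = 'FOUNDER' AND story.status = 'PUBLIC' RETURN story")
    Page<Story> findPublished(Profile profile, Pageable pageable);
}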
Question 3
I don't think the return type of an @Query has to be Iterable; it should also work with other types (Collections or concrete types). What is the issue you run into?
Question 4
For creating recursive relationships, try to store the relationship objects themselves first and only then the node entities. Or use template.createRelationshipBetween(start, end, type, allowDuplicates) for creating the relationships.
Question 5
As you are using SDN over REST, it might not perform very well, because right now the underlying implementation uses the RestGraphDatabase for fine-grained operations and the advanced mapping issues very fine-grained calls. Is there any reason why you don't want to use the embedded mode? Against a REST server I would most certainly use the simple mapping and try to handle read operations mostly with Cypher.
With the REST API there is only one tx per HTTP call; the only option for having larger transactions is to use the REST batch API.
There is pseudo-transaction support in the underlying rest-graph-database which batches calls issued within a "transaction" to be executed in one batch REST request. But those calls must not rely on read results during the tx; those will only be populated after the tx has finished. There were also some issues using this approach with SDN, so I disabled it for that (it is a config option/system property for the rest-graphdb).
Question 6
Right now, cross-store support for both MongoDB and Neo4j is only offered against a JPA/relational store. We discussed having cross-store references between the Spring Data projects once but didn't follow up on it.

Is it sometimes okay to use service locator pattern in a domain class?

This question may be more appropriate for the Programmers stack. If so, I will move it. However, I think I may get more answers here.
So far, all interface dependencies in my domain are resolved using DI from the executing assembly, which for now is a .NET MVC3 project (+ Unity IoC container). However, I've run across a scenario where I think a service locator may be a better choice.
There is an entity in the domain that stores (caches) content from a URL. Specifically, it stores SAML2 EntityDescriptor XML from a metadata URL. I have an interface IConsumeHttp with a single method:
public interface IConsumeHttp
{
    string Get(string url);
}
The current implementation uses the static WebRequest class in System.Net:
public class WebRequestHttpConsumer : IConsumeHttp
{
    public string Get(string url)
    {
        string content = null;
        var request = WebRequest.Create(url);
        var response = request.GetResponse();
        var stream = response.GetResponseStream();
        if (stream != null)
        {
            var reader = new StreamReader(stream);
            content = reader.ReadToEnd();
            reader.Close();
            stream.Close();
        }
        response.Close();
        return content;
    }
}
The entity which caches the XML content exists as a non-root in a much larger entity aggregate. For the rest of the aggregate, I am implementing a somewhat large Facade pattern, which is the public endpoint for the MVC controllers. I could inject the IConsumeHttp dependency in the facade constructor like so:
public AnAggregateFacade(IDataContext dataContext, IConsumeHttp httpClient)
{
...
The issue I see with this is that only one method in the facade has a dependency on this interface, so it seems silly to inject it for the whole facade. Object creation of the WebRequestHttpConsumer class shouldn't add a lot of overhead, but the domain is unaware of this.
I am instead considering moving all of the caching logic for the entity out into a separate static factory class. Still, the code will depend on IConsumeHttp. So I'm thinking of using a static service locator within the static factory method to resolve IConsumeHttp, but only when the cached XML needs to be initialized or refreshed.
My question: Is this a bad idea? It does seem to me that it should be the domain's responsibility to make sure the XML metadata is appropriately cached. The domain does this periodically as part of other related operations (such as getting metadata for SAML Authn requests & responses, updating the SAML EntityID or Metadata URL, etc). Or am I just worrying about it too much?
It does seem to me that it should be the domain's responsibility to make sure the XML metadata is appropriately cached
I'm not sure about that, unless your domain is really about metadata manipulation, http requests and so on. For a "normal" application with a non-technical domain, I'd rather deal with caching concerns in the Infrastructure/Technical Services layer.
The issue I see with this is that only one method in the facade has a dependency on this interface, so it seems silly to inject it for the whole facade
Obviously, Facades usually don't lend themselves very well to constructor injection since they naturally tend to point to many dependencies. You could consider other types of injection or, as you pointed out, using a locator. But what I'd personally do is ask myself if a Facade is really appropriate and consider using finer-grained objects instead of the same large interface in all of my controllers. This would allow for more modularity and ad-hoc injection rather than inflating a massive object upfront.
But that may just be because I'm not a big Facade fan ;)
In your comment to @ian31, you mention "It seems like making the controller ensure the domain has the correct XML is too granular, giving the client too much responsibility". For this reason, I'd prefer that the controller ask its service/repository (which can implement the caching layer) for the correct & current XML. To me, this responsibility is a lot to ask of the domain entity.
However, if you're OK with the responsibilities you've outlined, and you mention the object creation isn't much overhead, I think leaving the IConsumeHttp in the entity is fine.
Sticking with this responsibility, another approach could be to move this interface down into a child entity. If this was possible for your case, at least the dependency is confined to the scenario that requires it.

Spring MVC 3 - Binding an 'immutable' object to a form

I have several thoroughly unit-tested and finely crafted rich DDD model classes, with final immutable invariants and integrity checks. Objects are instantiated through adequate constructors, static factory methods and even via Builders.
Now, I have to provide a Spring MVC form to create new instances of some classes.
It seems to me (I'm not an expert) that I have to provide an empty constructor and attribute setters for all the form-backing classes I want to bind.
So, what should I do?
Create anemic objects dedicated to form backing and transfer the information to my domain model (so much for the DRY principle...) by calling the appropriate methods/builder?
Or is there a mechanism I missed that can save my day? :)
Thank you in advance for your wisdom!
The objects that are used for binding with the presentation layers are normally called view models and they are DTOs purposed toward displaying data mapped from domain objects and then mapping user input back to domain objects. View models typically look very similar to the domain objects they represent however there are some important differences:
Data from the domain objects may be flattened or otherwise transformed to fit the requirements of a given view. Having the mapping be in plain objects is easier to manage than mappings in the presentation framework, such as MVC. It is easier to debug and detect errors.
A given view may require data from multiple domain objects - there may not be a single domain object that fits requirements of a view. A view model can be populated by multiple domain objects.
A view model is normally designed with a specific presentation framework in mind and as such may utilize framework specific attributes for binding and client side validation. As you stated, a typical requirement is for a parameterless constructor, which is fine for a view model. Again, it is much easier to test and manage a view model than some sort of complex mapping mechanism.
View models appear to violate the DRY principle; however, after a closer look, the responsibility of the view model is different, so with the single responsibility principle in mind it is appropriate to have two classes. Also, take a look at this article discussing the fallacy of reuse often led by the DRY principle.
Furthermore, view models are indeed anemic, though they may have a constructor accepting a domain object as a parameter and a method for creating and updating a domain object using the values in the view model as input. From experience I find that it is a good practice to create a view model class for every domain entity that is going to be rendered by the presentation layer. It is easier to manage the double class hierarchy of domain objects and view models than it is to manage complex mapping mechanisms.
Note also, there are libraries that attempt to simplify the mapping between view models and domain objects, for example AutoMapper for the .NET Framework.
Yes, you will need to create objects for the form to take all the input, and then update your model with these objects in one operation.
But I wouldn't call these objects anemic (especially if you do DDD). These objects represent one unit of work, so they are domain concepts too!
I solved this by creating a DTO Interface:
public interface DTO<T> {
    T getDomainObject();
    void loadFromDomainObject(T domainObject);
}

public class PersonDTO implements DTO<Person> {
    private String firstName;
    private String lastName;

    public PersonDTO() {
        super();
    }

    // setters, getters ...

    @Override
    public Person getDomainObject() {
        return new Person(firstName, lastName);
    }

    @Override
    public void loadFromDomainObject(Person person) {
        this.firstName = person.getFirstName();
        this.lastName = person.getLastName();
    }

    // validation methods, view formatting methods, etc
}
This also stops view validation and formatting concerns from leaking into the domain model. I really dislike having Spring-specific (or other framework-specific) annotations (@Value, etc.) and javax.validation annotations in my domain objects.
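To round this off, a sketch of how such a DTO might be wired into a Spring MVC 3 controller; PersonService, its save method, and the view names are hypothetical:
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.ModelAttribute;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

@Controller
@RequestMapping("/persons")
public class PersonController {

    private final PersonService personService; // hypothetical application service

    @Autowired
    public PersonController(PersonService personService) {
        this.personService = personService;
    }

    @RequestMapping(method = RequestMethod.GET)
    public String form(Model model) {
        // Spring binds the form against the mutable DTO, not the immutable entity
        model.addAttribute("personDTO", new PersonDTO());
        return "personForm";
    }

    @RequestMapping(method = RequestMethod.POST)
    public String submit(@ModelAttribute("personDTO") PersonDTO dto) {
        // The immutable domain object is only constructed after binding succeeds
        personService.save(dto.getDomainObject());
        return "redirect:/persons";
    }
}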
