I use a .jdl file directly to generate a microservice project.
I have a gateway and three microservices.
My simplified JDL file:
application {
config {
baseName projectName,
reactive false, //I tried to enforce this, although I know the default value is already false
applicationType gateway,
packageName com.ritan.projectName.gateway,
authenticationType jwt,
prodDatabaseType postgresql,
clientFramework react,
buildTool maven,
serviceDiscoveryType eureka,
searchEngine elasticsearch,
testFrameworks [cypress],
serverPort 8085
}
entities
/* from first microservice */
A, B, C,
/* from the second microservice */
D, E,
/* from the third microservice */
F, G
}
application {
config {
baseName firstMicroservice,
reactive false,
applicationType microservice,
packageName com.ritan.projectName.firstMicroservice,
authenticationType jwt,
prodDatabaseType postgresql,
serviceDiscoveryType eureka,
searchEngine elasticsearch,
serverPort 8081
}
entities A, B, C
}
application {
config {
baseName secondMicroservice,
reactive false,
applicationType microservice,
packageName com.ritan.projectName.secondMicroservice,
authenticationType jwt,
prodDatabaseType postgresql,
serviceDiscoveryType eureka,
searchEngine elasticsearch,
serverPort 8082
}
entities D, E
}
application {
config {
baseName thirdMicroservice,
reactive false,
applicationType microservice,
packageName com.ritan.projectName.thirdMicroservice,
authenticationType jwt,
prodDatabaseType postgresql,
serviceDiscoveryType eureka,
searchEngine elasticsearch,
serverPort 8083
}
entities F, G
}
entity A {
/* some values */
}
entity B {
/* some values */
}
entity C {
/* some values */
}
entity D {
/* some values */
}
entity E {
/* some values */
}
entity F {
/* some values */
}
entity G {
/* some values */
}
/* Relationships */
/* I do not have entities that belong to several microservices. */
microservice A, B, C with firstMicroservice
microservice D, E with secondMicroservice
microservice F, G with thirdMicroservice
paginate A, B, C with pagination
paginate D, E with pagination
paginate F, G with infinite-scroll
service A, B, C, D, E, F, G with serviceClass
search A, B, C, D, E, F, G with elasticsearch
References:
JDL Options (the reactive option defaults to false)
Other JDL projects I've been looking at.
As I said in the title, the generated code is reactive, which is quite bizarre. To be precise, only the code in the gateway is reactive; the microservices are not.
One more question: should the gateway have generated code for communicating with the microservices? At the moment it only has code for communicating with its own database (the JHipster users database).
And if I have to write that code myself, should the connection use WebClient or something else? As far as I know, RestTemplate cannot be used since it is blocking, and the gateway is reactive, so we are in a non-blocking context.
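For illustration only (this is not code JHipster generated for me), a non-blocking call from the gateway with WebClient might look roughly like the sketch below; the class name, service id and endpoint path are assumptions, and in a secured setup the JWT would still have to be relayed:
import org.springframework.stereotype.Service;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

// Hypothetical sketch: calling firstMicroservice from the reactive gateway without blocking.
@Service
public class FirstMicroserviceClient {

    private final WebClient webClient;

    // Assumes a load-balanced WebClient.Builder bean so that "firstmicroservice"
    // is resolved through Eureka service discovery.
    public FirstMicroserviceClient(WebClient.Builder builder) {
        this.webClient = builder.baseUrl("http://firstmicroservice").build();
    }

    public Mono<String> getA(Long id) {
        return webClient.get()
                .uri("/api/as/{id}", id)   // hypothetical endpoint exposed by the microservice
                .retrieve()
                .bodyToMono(String.class); // non-blocking, unlike RestTemplate (a DTO class would normally go here)
    }
}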
JHipster version:
jhipster --version
INFO! Using bundled JHipster
7.7.0
Related
I want to build a NestJS project where I will separate queries and mutations into two groups: Internal and Public. In the Internal GraphQL module, I will define the path and resolvers without any restrictions, but for the Public one, I want to define a GraphQL module with a path and a JWT guard, which will use the same resolvers but expose only specific mutations and queries.
I tried to do the following:
GraphQLModule.forRootAsync<ApolloDriverConfig>({
driver: ApolloDriver,
useClass: InternalGraphQLConfig,
}),
GraphQLModule.forRootAsync<ApolloDriverConfig>({
driver: ApolloDriver,
useClass: PublicGraphQLConfig,
}),
To protect the public endpoint (the path is defined in PublicGraphQLConfig via GqlOptionsFactory), I added a NestMiddleware in which I check req.originalUrl against the path from PublicGraphQLConfig. If the URL is the public one, I check for the JWT; otherwise it is a free, internal URL.
But I do not know how and where I can define the list of queries and mutations for the Public GraphQL module, because I do not want to expose all of them.
From what I can see in the documentation, this approach may not be supported, and it may be impossible to do it correctly this way. Maybe I have to use directives or something else, but I firmly believe someone has faced a similar challenge and will share an idea or solution with me.
Edit 1:
Here I will add more details about resolvers and GraphQL Module configuration.
One of my resolvers looks like the following:
...
@Resolver(DogEntity)
class DogResolver {
  @Mutation(...)
  async firstMutation(...) {
    //
  }

  @Mutation(...)
  async secondMutation(...) {
    //
  }

  @Query(...)
  async firstQuery(...) {
    //
  }

  @Query(...)
  async secondQuery(...) {
    //
  }
  ...
}
GraphQL Module configuration looks like this:
...
@Injectable()
export class PublicGraphQLConfig implements GqlOptionsFactory {
  createGqlOptions(): Promise<ApolloDriverConfig> | ApolloDriverConfig {
    return {
      ...
      resolvers: { DogResolver, ... },
      path: '/my/public/route/graphql',
    };
  }
}
The first: it would be amazing if I could add a "global" guard at the GraphQL module level, for example via a guards parameter in PublicGraphQLConfig. Because that is impossible, and adding any JWT validation in the context parameter makes no sense, I have to add middleware in which I check the path parameter from the GraphQL module configuration.
The Middleware looks like this:
...
@Injectable()
export class RequestResponseLoggerMiddleware implements NestMiddleware {
use(req: Request, res: Response, next: NextFunction) {
// For public endpoint, all resolvers required JWT token with Admin flag
if (req.originalUrl === '/my/public/route/graphql') {
this.validateJWT(req); // Do "throw exception inside"
}
...
The second: it would be amazing to be able to add a specific Mutation and/or Query to the GraphQL module configuration. With the resolvers parameter I can add only complete resolvers, not specific queries or mutations. With this, I would be able to expose specific queries and mutations from different endpoints with or without an authorization requirement.
A field in the GraphQL module configuration like the following would be amazing (but, as far as I can see, it does not exist):
...
return {
...
resolvers: {
DogResolver:firstMutation(),
DogResolver:firstQuery(),
...
},
path: '/my/public/route/graphql'
};
...
I am having fun using moleculer-runner instead of creating a ServiceBroker instance in a moleculer-web project I am working on. The Runner simplifies setting up services for moleculer-web, and all the services - including the api.service.js file - look and behave the same, using a module.exports = { blah } format.
I can cleanly define the REST endpoints in the api.service.js file and create the connected functions in the appropriate service files. For example, aliases: { 'GET sensors': 'sensors.list' } points to the list() action/function in sensors.service.js. It all works great using some dummy data in an array.
The next step is to get the service(s) to open a socket and talk to a local program listening on a fixed internal address/port. The idea is to accept a REST call from the web, talk to a local program over a socket to get some data, then format and return the data to the client via REST.
BUT when I want to use sockets with Moleculer, I'm having trouble finding useful info and examples on integrating moleculer-io with a moleculer-runner-based setup. All the examples I find use the ServiceBroker model. I thought my Google-Fu was pretty good, but I'm at a loss as to where to look next. Or can I modify the ServiceBroker examples to work with moleculer-runner? Any insight or input is welcome.
If you want the following chain:
localhost:3000/sensor/list -> sensor.list() -> send message to local program:8071 -> get response -> send response as return message to the REST caller.
Then you need to add a socket.io client to your sensor service (the one with the list() action). Adding a client will allow it to communicate with the "outside world" via sockets.
I think the setup below has everything that you need.
As a skeleton I've used the moleculer-demo project.
What I have:
An API service, api.service.js, that handles the HTTP requests and passes them on to sensor.service.js.
The sensor.service.js is responsible for communicating with the remote socket.io server, so it needs to have a socket.io client. When the sensor.service.js service has started(), I establish a connection with a remote server located at port 8071. After this I can use this connection in my service actions to communicate with the socket.io server. This is exactly what I'm doing in the sensor.list action.
I've also created remote-server.service.js to mock your socket.io server. Despite being a Moleculer service, sensor.service.js communicates with it via the socket.io protocol.
It doesn't matter whether or not your services use socket.io; all the services are declared in the same way, i.e., module.exports = {}.
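To make that description concrete, here is a rough sketch of what such a sensor.service.js could look like; the port, the event name and the (err, data) acknowledgement convention are assumptions about the local program, not code taken from the project:
// sensor.service.js -- rough sketch, assumptions noted above
const io = require("socket.io-client");

module.exports = {
    name: "sensor",

    actions: {
        // Called via the gateway alias, e.g. 'GET sensors': 'sensor.list'
        list(ctx) {
            // Ask the local program for data over the socket and resolve with its reply.
            return new Promise((resolve, reject) => {
                this.socket.emit("sensor.list", (err, data) => {
                    if (err) return reject(err);
                    resolve(data);
                });
            });
        }
    },

    started() {
        // Open the socket.io connection to the local program when the service starts.
        this.socket = io("http://localhost:8071");
    },

    stopped() {
        // Close the connection when the service stops.
        if (this.socket) this.socket.close();
    }
};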
Below is a working example with socket.io.
const { ServiceBroker } = require("moleculer");
const ApiGateway = require("moleculer-web");
const SocketIOService = require("moleculer-io");
const io = require("socket.io-client");
const IOService = {
name: "api",
// SocketIOService should be after moleculer-web
// Load the HTTP API Gateway to be able to reach "greeter" action via:
// http://localhost:3000/hello/greeter
mixins: [ApiGateway, SocketIOService]
};
const HelloService = {
name: "hello",
actions: {
greeter() {
return "Hello Via Socket";
}
}
};
const broker = new ServiceBroker();
broker.createService(IOService);
broker.createService(HelloService);
broker.start().then(async () => {
const socket = io("http://localhost:3000", {
reconnectionDelay: 300,
reconnectionDelayMax: 300
});
socket.on("connect", () => {
console.log("Connection with the Gateway established");
});
socket.emit("call", "hello.greeter", (error, res) => {
console.log(res);
});
});
To make it work with moleculer-runner, just copy each service declaration into its own *.service.js file. So, for example, your api.service.js could look like:
// api.service.js
const ApiGateway = require("moleculer-web");
const SocketIOService = require("moleculer-io");

module.exports = {
    name: "api",
    // SocketIOService should be after moleculer-web
    // Load the HTTP API Gateway to be able to reach the "greeter" action via:
    // http://localhost:3000/hello/greeter
    mixins: [ApiGateway, SocketIOService]
}
and your greeter service:
// greeter.service.js
module.exports = {
name: "hello",
actions: {
greeter() {
return "Hello Via Socket";
}
}
}
And run npm run dev or moleculer-runner --repl --hot services
I am developing APIs & microservices in NestJS.
This is my controller function:
@Post()
@MessagePattern({ service: TRANSACTION_SERVICE, msg: 'create' })
create(@Body() createTransactionDto: TransactionDto_create): Promise<Transaction> {
  return this.transactionsService.create(createTransactionDto);
}
When I call the POST API, DTO validation works fine, but when I call it as a microservice, validation does not work and the request passes through to the service without being rejected with an error.
Here is my DTO:
import { IsEmail, IsNotEmpty, IsString } from 'class-validator';

export class TransactionDto_create {
  @IsNotEmpty()
  action: string;

  // @IsString()
  readonly rec_id: string;

  @IsNotEmpty()
  readonly data: Object;

  extras: Object;
  // readonly extras2: Object;
}
When I call the API without the action parameter it shows the error that action is required, but when I call it from the microservice using
const pattern = { service: TRANSACTION_SERVICE, msg: 'create' };
const data = {id: '5d1de5d787db5151903c80b9', extras:{'asdf':'dsf'}};
return this.client.send<number>(pattern, data)
it does not throw an error and goes straight through to the service.
I have also added a global validation pipe:
app.useGlobalPipes(new ValidationPipe({
disableErrorMessages: false, // set true to hide detailed error message
whitelist: false, // set true to strip params which are not in DTO
transform: false // set true if you want DTO to convert params to DTO class by default its false
}));
How can I make it work for both the API and the microservice? I need everything in one place with the same functionality so that it can be called by either kind of client.
ValidationPipe throws an HTTP BadRequestException, whereas the proxy client expects an RpcException.
@Catch(HttpException)
export class RpcValidationFilter implements ExceptionFilter {
  catch(exception: HttpException, host: ArgumentsHost) {
    return new RpcException(exception.getResponse());
  }
}

@UseFilters(new RpcValidationFilter())
@MessagePattern('validate')
async validate(
  @Payload(new ValidationPipe({ whitelist: true })) payload: SomeDTO,
) {
  // payload validates to SomeDto
  . . .
}
I'm going out on a limb and assuming that in your main.ts you have the line app.useGlobalPipes(new ValidationPipe());. From the documentation:
In the case of hybrid apps the useGlobalPipes() method doesn't set up pipes for gateways and micro services. For "standard" (non-hybrid) microservice apps, useGlobalPipes() does mount pipes globally.
You could instead bind the pipe globally from the AppModule (roughly as sketched below), or you could use the @UsePipes() decorator on each route that needs validation via the ValidationPipe.
More info on binding pipes here
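Binding it from the AppModule could look roughly like this (a minimal sketch using the standard APP_PIPE token; your module will of course keep its real imports and providers):
// app.module.ts -- sketch: register ValidationPipe through DI instead of useGlobalPipes()
import { Module, ValidationPipe } from '@nestjs/common';
import { APP_PIPE } from '@nestjs/core';

@Module({
  providers: [
    {
      provide: APP_PIPE,       // global pipe bound from the module
      useClass: ValidationPipe,
    },
  ],
})
export class AppModule {}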
As I understand it, useGlobalPipes works fine for the API but not for the microservice.
The reason is that a Nest application set up this way is a hybrid application, and it has some restrictions. Please refer to the paragraph below:
By default a hybrid application will not inherit global pipes, interceptors, guards and filters configured for the main (HTTP-based) application. To inherit these configuration properties from the main application, set the inheritAppConfig property in the second argument (an optional options object) of the connectMicroservice() call.
Please refer to the official Nest documentation.
So you need to add the inheritAppConfig option to the connectMicroservice() call:
const microservice = app.connectMicroservice(
{
transport: Transport.TCP,
},
{ inheritAppConfig: true },
);
It worked for me!
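For context, a full main.ts for such a hybrid app could look roughly like this (a sketch; the TCP transport and the port are just examples):
// main.ts -- sketch of a hybrid bootstrap with inheritAppConfig
import { NestFactory } from '@nestjs/core';
import { ValidationPipe } from '@nestjs/common';
import { Transport } from '@nestjs/microservices';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  app.useGlobalPipes(new ValidationPipe());

  // inheritAppConfig lets the microservice part reuse the global pipe above.
  app.connectMicroservice(
    { transport: Transport.TCP },
    { inheritAppConfig: true },
  );

  await app.startAllMicroservices();
  await app.listen(3000);
}
bootstrap();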
I am new to Spring 5.
1) How can I log method params of Mono and Flux type without blocking them?
2) How do I map models at the API layer to business objects at the service layer using MapStruct?
Edit 1:
I have this imperative code which I am trying to convert into reactive code. It has a compilation issue at the moment due to the introduction of Mono in the argument.
public Mono<UserContactsBO> getUserContacts(Mono<LoginBO> loginBOMono)
{
LOGGER.info("Get contact info for login: {}, and client: {}", loginId, clientId);
if (StringUtils.isAllEmpty(loginId, clientId)) {
LOGGER.error(ErrorCodes.LOGIN_ID_CLIENT_ID_NULL.getDescription());
throw new ServiceValidationException(
ErrorCodes.LOGIN_ID_CLIENT_ID_NULL.getErrorCode(),
ErrorCodes.LOGIN_ID_CLIENT_ID_NULL.getDescription());
}
if (!loginId.equals(clientId)) {
if (authorizationFeignClient.validateManagerClientAccess(new LoginDTO(loginId, clientId))) {
loginId = clientId;
} else {
LOGGER.error(ErrorCodes.LOGIN_ID_VALIDATION_ERROR.getDescription());
throw new AuthorizationException(
ErrorCodes.LOGIN_ID_VALIDATION_ERROR.getErrorCode(),
ErrorCodes.LOGIN_ID_VALIDATION_ERROR.getDescription());
}
}
UserContactDetailEntity userContactDetail = userContactRepository.findByLoginId(loginId);
LOGGER.debug("contact info returned from DB{}", userContactDetail);
//mapstruct to map entity to BO
return contactMapper.userEntityToUserContactBo(userContactDetail);
}
You can try something like this.
If you want to add logs you can use .map and add the logging there. If the filters are not passed, the chain will return an empty Mono, which you can handle with switchIfEmpty.
loginBOMono
    .filter(loginBO -> !StringUtils.isAllEmpty(loginBO.loginId, loginBO.clientId))
    // keep the element if the ids match, or if the Feign authorization check passes
    .filter(loginBO -> loginBO.loginId.equals(loginBO.clientId)
            || authorizationFeignClient.validateManagerClientAccess(new LoginDTO(loginBO.loginId, loginBO.clientId)))
    .map(loginBO -> {
        loginBO.loginId = loginBO.clientId;
        return loginBO;
    })
    .flatMap(loginBO -> userContactRepository.findByLoginId(loginBO.loginId)) // assumes a reactive repository returning a Mono
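As a sketch of the switchIfEmpty idea (filterAndAuthorize below is a placeholder for the filter/map chain above; the exception and error codes are the ones from the question, and the repository is assumed to be reactive):
// Sketch only: turn the empty Mono produced by a failed filter back into an error
// instead of letting the chain complete silently.
public Mono<UserContactsBO> getUserContacts(Mono<LoginBO> loginBOMono) {
    return filterAndAuthorize(loginBOMono)                                            // placeholder for the chain above
            .flatMap(loginBO -> userContactRepository.findByLoginId(loginBO.loginId)) // assumes Mono<UserContactDetailEntity>
            .map(contactMapper::userEntityToUserContactBo)
            .switchIfEmpty(Mono.error(new ServiceValidationException(
                    ErrorCodes.LOGIN_ID_CLIENT_ID_NULL.getErrorCode(),
                    ErrorCodes.LOGIN_ID_CLIENT_ID_NULL.getDescription())));
}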
I am trying to trigger the WebSocket to push a notification to the client, but JHipster microservices do not support WebSockets in non-gateway services.
So I tried to send the WebSocket message from a gateway controller and to call that controller from the other services with a Feign client, but calling a gateway controller from another service does not seem to work.
#PostMapping("/notify")
#Timed
public ResponseEntity<String> sendNotification(#RequestBody TodoDTO todoDTO) {
String dest = "/topic/notify/";
if (StringUtils.isNotEmpty(todoDTO.getTo())) {
dest += todoDTO.getTo();
} else if (StringUtils.isNotEmpty(todoDTO.getToShopId())) {
dest += todoDTO.getToShopId();
} else if (StringUtils.isNotEmpty(todoDTO.getToParentShopId())) {
dest += todoDTO.getToParentShopId();
}
messagingTemplate.convertAndSend(dest, todoDTO);
return ResponseEntity.ok("SUCCESS");
}
Does anyone know the best practice for using WebSockets in JHipster microservices?