An API gateway uses a ClientProxy to interact with a microservice (call it "service A"):
import { ClientProxy } from "@nestjs/microservices";

@Injectable()
export class AppService {
  constructor(
    @Inject("SERVICE_A") private readonly clientServiceA: ClientProxy
  ) {}
}
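The injected proxy is then used to send messages. Here is a minimal sketch of the calling side, assuming a "sum" message pattern and a number[] payload (both invented for illustration; they must match a @MessagePattern handler on service A). The real ClientProxy.send() returns an RxJS Observable that you would convert with firstValueFrom; it is modeled as a Promise here to keep the sketch dependency-free:

```typescript
// Structural stand-in for the slice of ClientProxy used below, so the
// sketch runs without @nestjs/microservices. The real send() returns an
// RxJS Observable rather than a Promise.
interface MessageClient {
  send<T>(pattern: string, payload: unknown): Promise<T>;
}

// Request-response messaging: the pattern routes the payload to the
// matching handler on service A, whose return value becomes the reply.
export async function sumViaServiceA(
  client: MessageClient,
  numbers: number[]
): Promise<number> {
  return client.send<number>("sum", numbers);
}
```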
The microservice acts like a server, and bootstraps as follows:
async function bootstrap() {
  const app = await NestFactory.createMicroservice(AppModule, {
    transport: Transport.TCP,
    options: {
      host: "127.0.0.1",
      port: 8888
    }
  });
  await app.listen(() => logger.log("Microservice A is listening"));
}
bootstrap();
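On the microservice side, incoming patterns are handled by controller methods decorated with @MessagePattern. A sketch of the plain handler logic (the "sum" pattern name is an assumption for illustration):

```typescript
// In a real controller this function would back a handler such as:
//
//   @MessagePattern("sum")
//   accumulate(data: number[]): number { return sum(data); }
//
// The handler's return value is sent back to the calling ClientProxy.
export function sum(data: number[]): number {
  return data.reduce((total, n) => total + n, 0);
}
```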
The API gateway acts as a client and uses a ClientsModule to make the connection to "service A". All of that is done in the AppModule:
import { ClientsModule, Transport } from "@nestjs/microservices";

@Module({
  imports: [
    ClientsModule.register([
      {
        name: "SERVICE_A",
        transport: Transport.TCP,
        options: {
          host: "127.0.0.1",
          port: 8888
        }
      }
    ])
  ],
  controllers: [AppController],
  providers: [AppService],
})
export class AppModule {}
I have all of this from a great tutorial, and I found the same setup in a book published by Packt.
On a sidenote:
In all honesty, I would expect the API gateway to also act as a discovery server, with the other microservices connecting to it. That would allow multiple instances of each microservice, and it would provide an auto-discovery mechanism.
Microservices only make sense to me if they are loosely coupled, and I want to migrate to a situation where I can spin up multiple instances of a microservice and restart individual ones without downtime of the system. By contrast, the setup above has no real discovery server. The microservices could have connected to the API gateway, in which case the API gateway would also take on the role of a discovery server; but that's clearly not the case, so right now it's all tightly coupled. This side info may be relevant, but I don't want to overload the question, so let's not get carried away; perhaps I'm missing something, which is why I put this in just as a sidenote.
My real question is: does the above setup allow bi-directional communication? E.g., what if the microservice (service A) wants to make calls to the API gateway? In other words, is there a ServerProxy? Or can I use a ClientProxy on both ends of the communication? Or is the only way around this to make two connections?
API gateways are usually designed to control which services/endpoints get exposed to external clients (e.g. SPAs, mobile apps, or customers), and how they should be exposed (e.g. what are the routes, and how should clients authenticate). You should generally avoid putting any business logic into these gateways other than authentication/authorization needed for these external clients to identify themselves.
However, it's not unreasonable to have some of your microservices be dependent on features provided by each other. If you need one microservice to connect to another, you can follow the same pattern you used in the gateway to register the upstream microservice as a client of the downstream service – skipping the gateway altogether for internal service communication. I'd caution you to avoid bi-directional logic between these services as you could quickly lose track of the application flow and create circular references, but it is technically possible to have two microservices be clients of each other.
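It is also worth noting that a Nest application can be both at once: the gateway can keep serving HTTP while additionally listening as a microservice, so service A can register its own ClientProxy pointing back at it. A non-runnable sketch, assuming Nest's hybrid-application API (connectMicroservice / startAllMicroservices) and an arbitrary port 8889:

```typescript
// Hybrid application: HTTP server plus TCP microservice listener.
// Service A would then register the gateway as a client on port 8889,
// mirroring the ClientsModule.register() shown above.
const app = await NestFactory.create(AppModule);
app.connectMicroservice<MicroserviceOptions>({
  transport: Transport.TCP,
  options: { host: "127.0.0.1", port: 8889 },
});
await app.startAllMicroservices();
await app.listen(3000);
```

There is no ServerProxy: each direction of traffic is its own client-to-server connection, so bi-directional messaging does mean two connections, with a ClientProxy on each side.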
I have researched this topic online and have found nearly identical questions, but I need to know why in NestJS we have to use two packages to implement WebSocket communication.
The two packages are:
@nestjs/websockets
@nestjs/platform-socket.io
I understand that WebSocket is the protocol and Socket.IO is a library which has both server and client versions of it.
In the gateway file of NestJS when implementing a WebSocket connection, one has to write code similar to below.
import {
ConnectedSocket,
MessageBody,
OnGatewayConnection,
OnGatewayDisconnect,
SubscribeMessage,
WebSocketGateway,
WebSocketServer,
} from '@nestjs/websockets';
import { Server } from 'socket.io';
My questions:
What is the difference between WebSocketServer and Server here?
Why do we import Server from socket.io and not @nestjs/platform-socket.io?
How do you describe the purpose of using each of these packages in a single sentence?
@nestjs/websockets is the base package that makes websocket integration possible in NestJS. @nestjs/platform-socket.io is the specific package for socket.io integration, rather than something like @nestjs/platform-ws, which is for the ws package.
WebSocketServer is the decorator that tells Nest to inject the websocket server; Server is the socket.io type of that server.
We import Server from socket.io rather than from @nestjs/platform-socket.io because the platform package really just provides the websocket adapter that plugs into Nest; the server implementation itself comes from socket.io.
Single sentences:
@nestjs/websockets: allows for websocket communication via a websocket adapter
@nestjs/platform-socket.io: socket.io websocket adapter to allow for socket.io websocket communication with the server
socket.io: a websocket implementation and engine that is usable with and without NestJS
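A minimal gateway shows how the three fit together (non-runnable sketch; the 'events' message name is arbitrary): the decorators come from @nestjs/websockets, @nestjs/platform-socket.io supplies the adapter that lets Nest stand up the server, and socket.io supplies the Server type itself:

```typescript
import {
  MessageBody,
  SubscribeMessage,
  WebSocketGateway,
  WebSocketServer,
} from '@nestjs/websockets';
import { Server } from 'socket.io';

@WebSocketGateway()
export class EventsGateway {
  // WebSocketServer (decorator) asks Nest to inject the running server;
  // Server (type) describes what gets injected.
  @WebSocketServer()
  server: Server;

  @SubscribeMessage('events')
  handleEvent(@MessageBody() data: string): string {
    return data; // the returned value is sent back to the emitting client
  }
}
```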
I am doing a PoC on "write-behind cache" using Hazelcast.
Let's say I have two services/microservices:
"HZServer" (running on ports 9091, 9092, 9093). I have included the below dependencies in this service:
'com.hazelcast:hazelcast-all:4.0.3'
'org.springframework.boot:spring-boot-starter-data-jpa'
I have implemented MapStore in this service and connected to PostgreSQL using a CrudRepository. Only HZServer will communicate with the database.
I have configured this as a Hazelcast server. Also, if my understanding is correct, Hazelcast is running as an embedded server here.
Defined a MapConfig named "Country" with its MapStoreConfig implementation 'CountryMapStore'.
"MyClient" (running on ports 8081, 8082, 8083.... ). I have included the below dependencies in this service:
'com.hazelcast:hazelcast-all:4.0.3' (I could have used just hazelcast-client).
I have configured it as a Hazelcast client using "hazelcast-client.yaml". I also have some RestControllers defined in the MyClient service, so MyClient will communicate only with HZServer (the cache), and not with the DB. I fetch the "Country" map from the Hazelcast instance in the below manner:
IMap<String, Country> iMap = hazelcastInstance.getMap("Country");
I fetch and put key-value pairs in the below manner:
Country country = iMap.get(code); // Fetching
iMap.put(code, country); // Inserting or Updating
Please tell me: is this the only way of achieving a "write-behind" cache in Hazelcast?
Please find the architecture diagram below:
Very detailed context, this is great!
True "write-behind" means the interactions between the Hazelcast server and the database are asynchronous, so it depends on the exact configuration of the MapStore.
Note that in that case you may lose data; again, this depends on your specific implementation (e.g. you may retry until the transaction has been acknowledged).
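Write-behind itself is switched on in the MapStore configuration: a write-delay-seconds greater than zero makes Hazelcast batch and flush writes to the database asynchronously (zero would mean write-through). A sketch of the declarative YAML for the "Country" map, with a hypothetical fully qualified class name:

```yaml
hazelcast:
  map:
    Country:
      map-store:
        enabled: true
        class-name: com.example.CountryMapStore  # hypothetical package
        write-delay-seconds: 5    # > 0 => write-behind; 0 => write-through
        write-batch-size: 100     # flush up to 100 entries per batch
```

The same settings can also be applied programmatically via MapStoreConfig (setWriteDelaySeconds, setWriteBatchSize), so YAML is not the only route; but some non-zero write delay is what makes the cache write-behind rather than write-through.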
I've been reading through the nameko docs, and it is all good and clear, except for one part.
How do you actually deploy your nameko microservice?
I mean, it is clear how we deploy RESTful APIs in flask_restful, for instance. But with nameko?
If two microservices should communicate, how do we move them into the "listening" state?
I am not sure I understand your problem.
For each nameko service you define an AMQP_URI constant that points to your RabbitMQ instance.
If all of your services use the same AMQP_URI, they can communicate by sending RPC calls (with a queue per service endpoint) or via pub/sub messaging, because they share the same RabbitMQ instance.
You can also expose an HTTP REST API: define an endpoint in the nameko service with the http decorator (see the example here: https://nameko.readthedocs.io/en/stable/built_in_extensions.html). In your configuration you have to define the port for your web server, e.g. port 8000: WEB_SERVER_ADDRESS: 0.0.0.0:8000, and make that port accessible to the outside world.
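The "listening" state is simply nameko's runner connected to the broker. A minimal sketch of such a config file (the broker URL and port are illustrative assumptions):

```yaml
# config.yaml -- values are illustrative
AMQP_URI: pyamqp://guest:guest@localhost
WEB_SERVER_ADDRESS: 0.0.0.0:8000
```

Each service is then started with `nameko run services --config config.yaml`; every instance started this way against the same broker can immediately receive RPC calls and events from the others.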
I'm using Spring Boot for microservices, and I came across an issue with load balancing.
Spring Actuator adds special health and metrics endpoints to the apps; with these, some basic information can be acquired from the running instances.
What I would like to do is create a (reverse) proxy (e.g. with Zuul and/or Ribbon?) that acts as a centralized load balancer and selects instances by their health status.
For example, I have the following microservices
client
proxy (<- I would like to implement this)
server 1
server 2
When the client sends an HTTP request to the proxy, the proxy should be able to decide which of the two server instances has the least load, and forward the request to that one.
Is there an easy way to do this?
If you want to base the choice on load data, you could implement custom HealthIndicators that accumulate some kind of 'load over time' metric, and use this in your load balancer to decide where to send traffic.
All custom health indicators are picked up by Spring Boot and invoked on the actuator /health endpoint.
@Component
public class LoadIndicator implements HealthIndicator {
    @Override
    public Health health() {
        double loadData = ...; // gather whatever load metric you track
        return Health.up()
                .withDetail("load", loadData)
                .build();
    }
}
Perhaps you could use some of Spring Boot's existing metrics: the actuator exposes multiple endpoints (/beans, /trace, /metrics), so it should be possible to find the data you need in your application too.
I have a bunch of web services servers (around 200) running on the same machine which expose the same service on different ports.
I have a client which perform tasks which include calling the service on different servers.
Something like:
while (true) {
    task = readTask();
    runHelloService(task.serverAddress);
}
I was wondering what is the best way to generate the HelloService client proxy.
Can I generate one and replace the target address before each call?
Should I generate a client per server (which means 200 client proxies) and use the relevant one?
I will probably want to run the above loop concurrently on several threads.
Currently I have only one proxy, generated by Spring and CXF with the jaxws:client declaration.
This is an interesting use case. I believe that changing the endpoint whilst sharing the proxy amongst multiple threads will not work. There is a one-to-one relationship between a client proxy and a conduit definition. Changes to a conduit are explicitly not thread safe.
I recommend eschewing Spring configuration altogether to create client proxies and instead use programmatic construction of the 200 client proxies.
See also Custom CXF Transport - Simplified Client Workflow.