Load balancer in Spring Boot: ignore failing servers - spring-boot

I have a load balancer in Spring Boot, but I want it to ignore a server when it fails and use another one instead.
So, for example, if the second localhost fails, I would like it to try another server without the user noticing that a server is down. What can I do?
This is what I am using:
    @Override
    public Flux<List<ServiceInstance>> get() {
        return Flux.just(Arrays.asList(
                new DefaultServiceInstance(serviceId + "1", serviceId, "localhost", 8080, false),
                new DefaultServiceInstance(serviceId + "2", serviceId, "localhost", 9092, false),
                new DefaultServiceInstance(serviceId + "3", serviceId, "localhost", 9999, false)));
    }
}
to tell my code all the possible servers. But when one of them fails, it shows an error; I would like it to try another server instead of returning an HTTP 500 response, so the user does not notice there was an error in the first place.
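For what it's worth, one hedged option (not necessarily the canonical fix) is to keep the supplier as it is and let the client retry, so that a resubscription is load-balanced again and lands on the next instance. A minimal Java sketch, assuming Spring WebFlux and Spring Cloud LoadBalancer are on the classpath; the service name my-service, the /greeting path and the retry count are placeholders:

import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Service;
import org.springframework.web.reactive.function.client.WebClient;
import org.springframework.web.reactive.function.client.WebClientRequestException;
import reactor.core.publisher.Mono;
import reactor.util.retry.Retry;

@Configuration
class ClientConfig {

    // Load-balanced builder: requests to http://my-service are resolved
    // through the ServiceInstanceListSupplier shown above.
    @Bean
    @LoadBalanced
    WebClient.Builder loadBalancedWebClientBuilder() {
        return WebClient.builder();
    }
}

@Service
class GreetingClient {

    private final WebClient webClient;

    GreetingClient(WebClient.Builder builder) {
        this.webClient = builder.baseUrl("http://my-service").build();
    }

    Mono<String> greet() {
        return webClient.get()
                .uri("/greeting")
                .retrieve()
                .bodyToMono(String.class)
                // Retry on connection failures; each retry is load-balanced again,
                // so it typically hits another instance instead of returning a 500.
                .retryWhen(Retry.max(2)
                        .filter(ex -> ex instanceof WebClientRequestException));
    }
}

Spring Cloud LoadBalancer also ships a health-check flavoured ServiceInstanceListSupplier (the builder's withHealthChecks()) that periodically filters out instances failing a probe; that may be worth looking at as a longer-term fix, though I haven't verified it against this exact setup.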

Related

UnknownHostException when trying to connect using websocket

I have a use case where I need to send two requests to the server. The output of the first request is used in the second request, so the calls have to be synchronous. I am using the ktor (OkHttp) client websocket for this. I am failing at the first attempt to even connect to the server, with this error:
Exception in thread "main" java.net.UnknownHostException: https: nodename nor servname provided, or not known
I suspect I haven't split my URL properly and that's why it's not able to connect to the host.
A couple of questions:
Is there any benefit to using a websocket instead of two separate HTTP requests?
Is there a way I can just pass the URL to the websocket request?
What is the best and easiest way to get the response and send another request?
I have been able to find very limited documentation on the ktor client websocket.
const val HOST = "https://sample.com"
const val PATH1 = "/path/to/config?val1=<val1>&val2=<val2>"
const val PATH2 = "/path/to/config?val=<response_from_first_req>"

fun useSocket() {
    val client = HttpClient() {
        install(WebSockets)
    }
    runBlocking {
        client.webSocket(method = HttpMethod.Get, host = HOST, path = PATH1) {
            val othersMessage = incoming.receive() as? Frame.Text
            println(othersMessage?.readText())
            println("Testing")
        }
    }
    client.close()
}
Thanks in advance.
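Not an authoritative answer, but the UnknownHostException mentioning "https:" suggests the scheme ended up being treated as part of the host; the host parameter should be just sample.com (no https://), and websocket URLs use ws:// or wss://. Regarding the second question, recent ktor versions let you pass the whole URL to webSocket. A hedged Kotlin sketch, assuming ktor 2.x imports and keeping the placeholder query values from above:

import io.ktor.client.*
import io.ktor.client.plugins.websocket.*
import io.ktor.websocket.*
import kotlinx.coroutines.runBlocking

fun useSocket() = runBlocking {
    val client = HttpClient {
        install(WebSockets)
    }
    // Pass the full URL instead of splitting it into host/path,
    // and note the wss:// scheme rather than https://.
    client.webSocket("wss://sample.com/path/to/config?val1=<val1>&val2=<val2>") {
        val first = incoming.receive() as? Frame.Text
        println(first?.readText())
        // The value from this response can be used to build the second request here.
    }
    client.close()
}

As for the first question: if all you need is two sequential request/response exchanges, two ordinary HTTP calls in the same coroutine are usually simpler; a websocket mainly pays off when the connection is reused for many messages.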

Using SocketIo Manager with a default URL

My goal is to add a token to the socket.io reconnection from the client (it works fine on the first connection, but the query is null on reconnection if the server restarted while the client stayed on).
The documentation indicates I need to use the Manager to customize the reconnection behavior (and add a query parameter).
However, I'm having trouble figuring out how to use this Manager: I can't find a way to connect to the server.
What I was using without Manager (works fine):
this.socket = io({
    query: {
        token: 'abc',
    }
});
Version with the Manager:
const manager = new Manager(window.location, {
    hostname: "localhost",
    path: "/socket.io",
    port: "8080",
    query: {
        auth: "123"
    }
});
So I tried many approaches for the URL (nothing, '', 'http://localhost:8080', 'http://localhost:8080/socket.io'), as well as adding these lines to the options:
hostname: "localhost",
path: "/socket.io",
port: "8080"
But I couldn't connect.
The documentation indicates the default URL is:
url (String) (defaults to window.location)
For some reason, using window.location as the URL refreshes the page infinitely, no matter whether I pass it to the io() call or to the new Manager.
I am using socket.io-client 3.0.3.
Could someone explain to me what I'm doing wrong?
Thanks
Updating to 3.0.4 solved the initial problem, which was to be able to send the token in the initial query.
I also found this code in the doc, which solves the problem:
this.socket.on('reconnect_attempt', () => {
    socket.io.opts.query = {
        token: 'fgh'
    }
});
However, it doesn't solve the problem of the Manager, which just doesn't work; I feel like it should be removed from the doc. I illustrated the problem in this repo:
https://github.com/Yvanovitch/socket.io/blob/master/examples/chat/public/main.js
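For what it's worth, one thing the Manager snippet above never does is ask the Manager for an actual Socket; the Manager only manages the underlying connection. A hedged sketch of how that usually looks with socket.io-client 3.x (the URL and token are placeholders, and I haven't verified it against this exact repo):

import { Manager } from "socket.io-client";

const manager = new Manager("http://localhost:8080", {
    query: {
        token: "abc"
    }
});

// The Socket (per namespace) is obtained from the Manager; this is what
// you listen and emit on, not the Manager itself.
const socket = manager.socket("/");

socket.on("connect", () => {
    console.log("connected through the Manager");
});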

Examples of integrating moleculer-io with moleculer-web using moleculer-runner instead of ServiceBroker?

I am having fun with using moleculer-runner instead of creating a ServiceBroker instance in a moleculer-web project I am working on. The Runner simplifies setting up services for moleculer-web, and all the services - including the api.service.js file - look and behave the same, using a module.exports = { blah } format.
I can cleanly define the REST endpoints in the api.service.js file, and create the connected functions in the appropriate service files. For example, aliases: { 'GET sensors': 'sensors.list' } points to the list() action/function in sensors.service.js. It all works great using some dummy data in an array.
The next step is to get the service(s) to open up a socket and talk to a local program listening on an internal set address/port. The idea is to accept a REST call from the web, talk to a local program over a socket to get some data, then format and return the data back via REST to the client.
But when I want to use sockets with moleculer, I'm having trouble finding useful info and examples on integrating moleculer-io with a moleculer-runner-based setup. All the examples I find use the ServiceBroker model. I thought my Google-Fu was pretty good, but I'm at a loss as to where to look next. Or can I modify the ServiceBroker examples to work with moleculer-runner? Any insight or input is welcome.
If you want the following chain:
localhost:3000/sensor/list -> sensor.list() -> send message to local program:8071 -> get response -> send response as return message to the REST caller.
Then you need to add a socket.io client to your sensor service (which has the list() action). Adding a client will allow it to communicate with the "outside world" via sockets.
Check the image below. I think it has everything that you need.
As a skeleton I've used the moleculer-demo project.
What I have:
An API service, api.service.js, that handles the HTTP requests and passes them to sensor.service.js.
The sensor.service.js will be responsible for communicating with the remote socket.io server, so it needs to have a socket.io client. In the service's started() hook I establish a connection with a remote server located at port 8071. After this I can use this connection in my service actions to communicate with the socket.io server. This is exactly what I'm doing in the sensor.list action.
I've also created remote-server.service.js to mock your socket.io server. Despite being a moleculer service, sensor.service.js communicates with it via the socket.io protocol.
It doesn't matter whether or not your services use socket.io. All the services are declared in the same way, i.e., module.exports = {}.
Below is a working example with socket.io.
const { ServiceBroker } = require("moleculer");
const ApiGateway = require("moleculer-web");
const SocketIOService = require("moleculer-io");
const io = require("socket.io-client");

const IOService = {
    name: "api",
    // SocketIOService should be after moleculer-web
    // Load the HTTP API Gateway to be able to reach "greeter" action via:
    // http://localhost:3000/hello/greeter
    mixins: [ApiGateway, SocketIOService]
};

const HelloService = {
    name: "hello",
    actions: {
        greeter() {
            return "Hello Via Socket";
        }
    }
};

const broker = new ServiceBroker();
broker.createService(IOService);
broker.createService(HelloService);

broker.start().then(async () => {
    const socket = io("http://localhost:3000", {
        reconnectionDelay: 300,
        reconnectionDelayMax: 300
    });

    socket.on("connect", () => {
        console.log("Connection with the Gateway established");
    });

    socket.emit("call", "hello.greeter", (error, res) => {
        console.log(res);
    });
});
To make it work with moleculer-runner just copy the service declarations into my-service.service.js. So for example, your api.service.js could look like:
// api.service.js
const ApiGateway = require("moleculer-web");
const SocketIOService = require("moleculer-io");

module.exports = {
    name: "api",
    // SocketIOService should be after moleculer-web
    // Load the HTTP API Gateway to be able to reach "greeter" action via:
    // http://localhost:3000/hello/greeter
    mixins: [ApiGateway, SocketIOService]
}
and your greeter service:
// greeter.service.js
module.exports = {
    name: "hello",
    actions: {
        greeter() {
            return "Hello Via Socket";
        }
    }
}
And run npm run dev or moleculer-runner --repl --hot services
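For reference, a hedged sketch of what the sensor.service.js described above could look like in the runner format; the port 8071 comes from the question, while the event name readSensors and the callback shape are assumptions about the local program's protocol:

// sensor.service.js
const io = require("socket.io-client");

module.exports = {
    name: "sensor",

    actions: {
        // Reached via a gateway alias, e.g. 'GET sensors': 'sensor.list'
        list() {
            return new Promise((resolve, reject) => {
                // Ask the local program for data over the socket opened in started().
                this.socket.emit("readSensors", {}, (err, data) => {
                    if (err) return reject(err);
                    resolve(data);
                });
            });
        }
    },

    started() {
        // Open the connection once when the service starts and reuse it in actions.
        this.socket = io("http://localhost:8071");
    },

    stopped() {
        if (this.socket) this.socket.close();
    }
};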

Service Fabric https endpoint with kestrel and reverse proxy

I've been trying to set up HTTPS on a stateless API endpoint following the instructions in the Microsoft documentation and various posts/blogs I could find. It works fine locally, but I'm struggling to make it work after deploying it to my dev server. I'm getting:
Browser : HTTP ERROR 504
Vm event viewer : HandlerAsyncOperation EndProcessReverseProxyRequest failed with FABRIC_E_TIMEOUT
SF event table : Error while processing request: request url = https://mydomain:19081/appname/servicename/api/healthcheck/ping, verb = GET, remote (client) address = xxx, request processing start time = 2018-03-13T14:50:17.1396031Z, forward url = https://0.0.0.0:44338/api/healthcheck/ping, number of successful resolve attempts = 48, error = 2147949567, message = , phase = ResolveServicePartition
In code, I have this in the instance listener:
.UseKestrel(options =>
{
    options.Listen(IPAddress.Any, 44338, listenOptions =>
    {
        listenOptions.UseHttps(GetCertificate());
    });
})
ServiceManifest:
<Endpoint Protocol="https" Name="SslServiceEndpoint" Type="Input" Port="44338" />
Startup:
services.AddMvc(options =>
{
    options.SslPort = 44338;
    options.Filters.Add(new RequireHttpsAttribute());
});
plus:
var options = new RewriteOptions().AddRedirectToHttps(StatusCodes.Status301MovedPermanently, 44338);
app.UseRewriter(options);
Here is what I have in Azure (deployed through an ARM template):
Health probes
NAME PROTOCOL PORT USED BY
AppPortProbe TCP 44338 AppPortLBRule
FabricGatewayProbe TCP 19000 LBRule
FabricHttpGatewayProbe TCP 19080 LBHttpRule
SFReverseProxyProbe TCP 19081 LBSFReverseProxyRule
Load balancing rules
NAME LOAD BALANCING RULE BACKEND POOL HEALTH PROBE
AppPortLBRule AppPortLBRule (TCP/44338) LoadBalancerBEAddressPool AppPortProbe
LBHttpRule LBHttpRule (TCP/19080) LoadBalancerBEAddressPool FabricHttpGatewayProbe
LBRule LBRule (TCP/19000) LoadBalancerBEAddressPool FabricGatewayProbe
LBSFReverseProxyRule LBSFReverseProxyRule (TCP/19081) LoadBalancerBEAddressPool SFReverseProxyProbe
I have a cluster certificate, a ReverseProxy certificate, and auth to the API through Azure AD, and in the ARM template:
"fabricSettings": [
{
"parameters": [
{
"name": "ClusterProtectionLevel",
"value": "[parameters('clusterProtectionLevel')]"
}
],
"name": "Security"
},
{
"name": "ApplicationGateway/Http",
"parameters": [
{
"name": "ApplicationCertificateValidationPolicy",
"value": "None"
}
]
}
],
I'm not sure what else could be relevant; if you have any ideas/suggestions, they are really welcome.
Edit: code for GetCertificate()
private X509Certificate2 GetCertificate()
{
    var certificateBundle = Task.Run(async () => await GetKeyVaultClient()
        .GetCertificateAsync(Environment.GetEnvironmentVariable("KeyVaultCertifIdentifier")));

    var certificate = new X509Certificate2();
    certificate.Import(certificateBundle.Result.Cer);

    return certificate;
}

private KeyVaultClient GetKeyVaultClient() => new KeyVaultClient(async (authority, resource, scope) =>
{
    var context = new AuthenticationContext(authority, TokenCache.DefaultShared);
    var clientCred = new ClientCredential(Environment.GetEnvironmentVariable("KeyVaultClientId"),
        Environment.GetEnvironmentVariable("KeyVaultSecret"));
    var authResult = await context.AcquireTokenAsync(resource, clientCred);

    return authResult.AccessToken;
});
Digging into your code, I've realized that there is nothing wrong with it except one thing. As you use Kestrel, you don't need to set up anything extra in the AppManifest, as those things are for the Http.Sys implementation. You don't even need to have an endpoint in the ServiceManifest (although it is recommended), as all these things are about URL reservation for the service account and SSL binding configuration, neither of which is required with Kestrel.
What you do need to do is use IPAddress.IPv6Any when you configure SSL. Aside from the fact that it turns out to be the recommended way, allowing you to accept both IPv4 and IPv6 connections, it also results in a 'correct' endpoint registration in SF. See, when you use IPAddress.Any, SF sets up an endpoint like https://0.0.0.0:44338, and that's how the reverse proxy will try to reach the service, which obviously wouldn't work: 0.0.0.0 doesn't correspond to any particular IP, it's just a way to say 'any IPv4 address at all'. When you use IPAddress.IPv6Any instead, you get a correct endpoint mapped to the VM IP address that can be resolved from within the VNet. You can see this for yourself in SF Explorer if you go down to the endpoint section in the service instance blade.
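In code, that would just mean changing the Listen call from the question, everything else staying the same:

.UseKestrel(options =>
{
    // IPv6Any (instead of Any) makes SF register an endpoint the reverse proxy
    // can actually resolve, rather than https://0.0.0.0:44338.
    options.Listen(IPAddress.IPv6Any, 44338, listenOptions =>
    {
        listenOptions.UseHttps(GetCertificate());
    });
})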

Akka HTTP not allowing incoming connections from remote hosts on macOS

I wanted to try out the following small example:
object Webserver {
  def main(args: Array[String]) {
    implicit val system = ActorSystem("my-system")
    implicit val materializer = ActorMaterializer()
    // needed for the future flatMap/onComplete in the end
    implicit val executionContext = system.dispatcher

    val route =
      path("hello") {
        get {
          redirect(Uri("https://google.com"), StatusCodes.PermanentRedirect)
        }
      }

    val bindingFuture = Http().bindAndHandle(route, "localhost", 8080)

    println(s"Server online at http://localhost:8080/\nPress RETURN to stop...")
    StdIn.readLine() // let it run until user presses return
    bindingFuture
      .flatMap(_.unbind()) // trigger unbinding from the port
      .onComplete(_ => system.terminate()) // and shutdown when done
  }
}
This works perfectly when accessing it from the same host on macOS. However, when I access the host remotely, I can't reach the Akka webserver.
I have checked my Firewall options and verified that the program java is allowed to accept incoming connections.
One more suspicious thing: when I run python -m SimpleHTTPServer 8080, I get the following window (the macOS prompt asking whether to allow incoming network connections):
I don't get this window when starting my Akka application. Do I have to implement custom logic to ask for permission or something?
To enable remote access to your server, you need to bind your server to the external interface. To simply bind to all interfaces, you can set the host/IP to 0.0.0.0, like:
Http().bindAndHandle(route, "0.0.0.0", 8080)
