I have a Spring Boot RESTful service endpoint. The service should always be available on port 8080, but when too many HTTP requests arrive I want it to scale up automatically, and to scale back down when the number of requests drops. The scaling up/down itself is not a problem, because I can use Spring Cloud Eureka + Jenkins. The problem is that the new service instances are created with different port numbers (obviously), and I need to hide the whole scaling mechanism from clients, because they should only ever use port 8080. So I am confused about how to load balance requests arriving on port 8080 across my multiple instances, which are running on other ports. I would appreciate any help.
Use Zuul routing to load balance across your instances.
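A minimal sketch of that idea, assuming your scaled instances register themselves in Eureka under a service id such as my-service (both the service id and the /api/** path below are illustrative): the Zuul gateway is the only process bound to port 8080, and Ribbon spreads the proxied requests over whatever instances are currently registered.

```java
// Hypothetical gateway application: the only component clients talk to, on port 8080.
package com.example.gateway;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

@SpringBootApplication
@EnableDiscoveryClient   // register with / read from Eureka
@EnableZuulProxy         // turn this app into a routing and load-balancing proxy
public class GatewayApplication {
    public static void main(String[] args) {
        SpringApplication.run(GatewayApplication.class, args);
    }
}

// application.yml for the gateway (illustrative values):
// server:
//   port: 8080
// zuul:
//   routes:
//     my-service:
//       path: /api/**
//       serviceId: my-service   # Ribbon load balances across all Eureka instances of this id
```

Your back-end instances can then start on arbitrary ports (for example server.port: 0 for a random port) and register with Eureka; clients only ever see the gateway on port 8080.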
I've been reading through the nameko docs, and it is all good and clear, except for one part.
How do you actually deploy your nameko microservice?
I mean, it is clear how we deploy RESTful APIs with flask_restful, for instance. But what about nameko?
If two microservices need to communicate, how do we get them into the "listening" state?
I am not sure I understand your problem.
For each nameko service you define an AMQP_URI setting that points to your RabbitMQ instance.
If all of your services share the same AMQP_URI, they can communicate by sending RPC calls (where you have a queue per service endpoint) or via pub/sub messaging, because every service uses the same RabbitMQ instance.
You can also expose an HTTP REST API. To do so, define an endpoint in the nameko service with the http decorator (see the example here: https://nameko.readthedocs.io/en/stable/built_in_extensions.html). In your configuration you have to define the port for your web server, e.g. port 8000: WEB_SERVER_ADDRESS: 0.0.0.0:8000. And make that port accessible to the outside world.
I've started building a microservice application with the Netflix stack and have been successful in registering clients with the Eureka discovery server.
I want to have two instances of each client service,
and I'm wondering what happens if one instance of a client goes down. Does load balancing handle such situations? If yes, then isn't Eureka also acting as a failover system?
We have written a simple mechanism, using Spring Boot + WebSocket, that sends messages from the server to specific clients (based on the logged-in user).
Currently it runs on a single server, which works fine.
But our production servers run behind a load balancer.
How can we make sure that messages pushed from any of the server nodes reach the appropriate users?
Please advise on the possibilities. I have read some articles about RabbitMQ with SockJS, but it is not clear whether that would work in a load-balanced environment.
Thanks
If you have multiple instances of your WebSocket server, then every instance needs to know about the sessions that exist on the other instances.
Therefore you need to use a broker relay (not the in-memory broker provided by Spring) and set the UserRegistryBroadcast property.
You can find some info related to this at the end of this talk https://www.youtube.com/watch?v=nxakp15CACY
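For illustration, here is a minimal sketch of such a setup, assuming RabbitMQ with its STOMP plugin is reachable on localhost:61613; the endpoint path and the two broadcast destinations below are placeholder names, not anything your application requires.

```java
package com.example.ws;

import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.simp.config.MessageBrokerRegistry;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.StompEndpointRegistry;
import org.springframework.web.socket.config.annotation.WebSocketMessageBrokerConfigurer;

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        registry.setApplicationDestinationPrefixes("/app");
        // Relay to an external STOMP broker (e.g. RabbitMQ with the STOMP plugin)
        // instead of the in-memory simple broker, so every node shares one broker.
        registry.enableStompBrokerRelay("/topic", "/queue")
                .setRelayHost("localhost")   // assumption: broker host
                .setRelayPort(61613)         // default RabbitMQ STOMP port
                // Broadcast each node's connected-user registry so "/user/**"
                // destinations resolve even when the session lives on another node.
                .setUserRegistryBroadcast("/topic/simp-user-registry")
                .setUserDestinationBroadcast("/topic/unresolved-user-destination");
    }

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/ws").withSockJS();   // SockJS fallback for browsers
    }
}
```

With the relay and the broadcast destinations in place, sending to a user (convertAndSendToUser) should work regardless of which node currently holds that user's WebSocket session, since unresolved user destinations are re-broadcast through the shared broker.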
I was able to successfully register a single Node.js app instance using the Netflix Sidecar app. Both the Node.js and sidecar bridge apps are running in Cloud Foundry.
Result:
SAMPLE-NODEJS n/a (1) (1) UP (1)
When I scale the Node.js app to 3 instances, I cannot see the scaled instances in the Eureka service registry. It still shows 1 instance.
Can someone help me with this?
I want to register all the instances of the Node.js app with the Eureka service registry via the sidecar bridge app.
Regards,
Purandhar
Sidecar, like the Eureka Java client, is built to register only one application with the Eureka server at a time. It is not a Eureka proxy for multiple applications. I built a proof-of-concept proxy that will do what you want.
This happens because it is not your Node application that registers with Eureka, but your sidecar, which still runs as a single instance.
Simple solution
Scale your sidecars together with your Node apps (see the sketch after this answer). This is quite straightforward, particularly with container-based deployment: you can craft a Docker container that starts both a Node instance and a sidecar.
Load balancing
You can extend your sidecar application to load balance traffic across your Node instances. Your Node app will then still show up as a single instance, but you still get load balancing to the scaled Node instances.
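To make the "simple solution" concrete, here is a sketch of a per-instance sidecar, assuming Spring Cloud Netflix Sidecar is on the classpath and that the local Node.js app listens on port 3000 and exposes a health endpoint (all of these values are illustrative).

```java
package com.example.sidecar;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.sidecar.EnableSidecar;

// One sidecar process per Node.js instance; each pair is packaged (or containerised)
// together, so Eureka sees as many SAMPLE-NODEJS instances as you actually run.
@SpringBootApplication
@EnableSidecar
public class SidecarApplication {
    public static void main(String[] args) {
        SpringApplication.run(SidecarApplication.class, args);
    }
}

// application.yml for each sidecar/Node pair (illustrative values):
// server:
//   port: 5678                       # the sidecar's own port
// sidecar:
//   port: 3000                       # the port the local Node.js app listens on
//   health-uri: http://localhost:3000/health.json
```

Run one such sidecar next to each Node.js instance (for example inside the same container), and scaling the Node app to three instances should then show three SAMPLE-NODEJS entries in Eureka.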
In my project we have a requirement to run two instances of the Spring Cloud Config server, so that if one instance goes down, the other will take over the config server responsibilities.
Currently, you would need to put the config server behind a load balancer. It is stateless, so that wouldn't hurt. There is an open issue about configuring multiple config server URLs in the client, so that failover could be done there.
If you are running multiple instances of the config server, you can have them all register themselves in Eureka, and have the other microservices look the config server up by its application name via Eureka. This way, Zuul (and Ribbon) will take care of the load balancing.
Edit:
I guess spencergibb is right. It's best to use a load balancer, e.g. an ELB, if you're going to deploy on AWS.
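A sketch of the Eureka-registered variant, assuming the spring-cloud-config-server and Eureka client dependencies are on the classpath; the package and application names, and the client settings mentioned below, are illustrative.

```java
package com.example.config;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.config.server.EnableConfigServer;

// Each config server instance registers itself in Eureka under the same
// application name (e.g. "config-server"), so clients, or a gateway such as
// Zuul/Ribbon, can resolve and load balance across all running instances.
@SpringBootApplication
@EnableConfigServer
@EnableDiscoveryClient
public class ConfigServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(ConfigServerApplication.class, args);
    }
}
```

Client applications can then use discovery-first bootstrap (spring.cloud.config.discovery.enabled=true plus the config server's service id) instead of a fixed URI, so they can fail over between the registered config server instances.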