I want to know whether we can have multiple service discovery instances. If yes, how are service calls resolved?
Which service discovery tool or approach are you referring to?
DNS - distributed by nature and "eventually consistent" in terms of the CAP theorem.
Consul - a popular service discovery solution that also supports a cluster setup and takes care of leader elections, etc.
Maybe you can be more specific with your question?
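To illustrate the DNS case above, here is a minimal Python sketch of how a service call could be resolved when several instances are registered behind one name; the hostname is a placeholder and the selection strategy is deliberately naive:

```python
import random
import socket

# "example.com" stands in for a service hostname; DNS may return several
# A records for it, one per registered instance.
records = socket.getaddrinfo("example.com", 80, proto=socket.IPPROTO_TCP)
addresses = sorted({info[4][0] for info in records})

# Naive client-side resolution: pick one instance per call. Because DNS caches
# and propagates changes asynchronously, this list is only eventually consistent.
target = random.choice(addresses)
print("calling instance at", target)
```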
I have a Spring Boot application that I want to deploy on the OVH public cloud.
I need to deploy multiple instances of the same application, and each instance has to have its own resources (such as a MySQL database).
Each instance has to be accessible at a specific URL. For example:
The first instance is accessible at http://domainname/instance1/index.html
The second instance is accessible at http://domainname/instance2/index.html
I'm really new to everything that concerns cloud computing and deployments.
From what I have read on the internet, my idea is to:
Use Docker, where each instance runs inside its own container (to keep the resources separate for each instance)
Use Kubernetes to achieve the goal of having each instance accessible at a specific URL.
Am I wrong? Any online courses / resources / videos that can help would be awesome.
Thanks in advance.
Basically, Docker is a platform to develop, deploy, and run applications inside containers; containers are the run-time environment for images. Kubernetes plays the role of an orchestrator: it provides the means to build communication channels between containers in the cluster, and it uses Docker by default as the container runtime.
There are some essential concepts in Kubernetes that describe the cluster's core components and application workloads, and thus define the desired state of the cluster.
Kubernetes objects are the abstraction level for cluster management operations and for the run-time environment of containerized applications, backed by the associated resources in the Kubernetes API.
I would focus on the Kubernetes resources that are most crucial in the application deployment lifecycle:
Deployment is the main mechanism that defines how Pods should be created within a cluster and provides the configuration for the application's further run-time workflow.
Service describes how a particular Pod (or set of Pods) communicates with other resources within the cluster, providing an endpoint IP address and port where your application will respond.
Ingress exposes a Kubernetes Service outside the cluster, with some exclusive benefits like load balancing, SSL/TLS certificate termination, etc.
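To connect this to the original question (two instances behind one domain, routed by URL path), here is a minimal sketch using the official `kubernetes` Python client. The Service names `instance1-svc` and `instance2-svc`, the namespace, and the host are assumptions; each Service is expected to be backed by its own Deployment:

```python
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig, e.g. the one OVH provides


def path_rule(path: str, service_name: str) -> client.V1HTTPIngressPath:
    """Route one URL prefix to one Service (one application instance)."""
    return client.V1HTTPIngressPath(
        path=path,
        path_type="Prefix",
        backend=client.V1IngressBackend(
            service=client.V1IngressServiceBackend(
                name=service_name,
                port=client.V1ServiceBackendPort(number=80),
            )
        ),
    )


# One Ingress with two path rules, so each instance gets its own URL
# under the same domain.
ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(name="instances"),
    spec=client.V1IngressSpec(
        rules=[
            client.V1IngressRule(
                host="domainname",  # placeholder domain from the question
                http=client.V1HTTPIngressRuleValue(
                    paths=[
                        path_rule("/instance1", "instance1-svc"),  # assumed Service
                        path_rule("/instance2", "instance2-svc"),  # assumed Service
                    ]
                ),
            )
        ]
    ),
)

client.NetworkingV1Api().create_namespaced_ingress(namespace="default", body=ingress)
```

The same objects are normally written as YAML manifests and applied with kubectl; the client calls above map one-to-one onto those manifests.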
You can find more information about the Kubernetes implementation on OVH in the corresponding chapter of their guide.
Ideally, if it's a single application, it should connect to one backend database, not 3 different databases.
If your use case is very specific and you really want to connect 3 instances of an application to 3 different databases, then consider each deployed instance as an independent application, i.e. 3 different deployments.
Talking about Docker and Kubernetes: I don't feel you need these initially; rather, deploy your application directly to cloud instances. To achieve high availability of the application, deploy the instances as part of an autoscaling group and map an ELB to each autoscaling group. Finally, map the ELB CNAME in your DNS record and start using your application.
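As a rough illustration of only the last step (mapping the ELB CNAME in DNS), here is a hedged boto3 sketch; the hosted zone ID, record name, and ELB DNS name are placeholders, and the autoscaling group and ELB are assumed to already exist:

```python
import boto3

route53 = boto3.client("route53")

# Point app.example.com (placeholder) at the ELB's DNS name via a CNAME record.
route53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",  # placeholder hosted zone ID
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "CNAME",
                    "TTL": 300,
                    "ResourceRecords": [
                        {"Value": "my-elb-1234.eu-west-1.elb.amazonaws.com"}
                    ],
                },
            }
        ]
    },
)
```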
Docker and K8s come with their own learning curve and add overhead if you are new to this area. That said, they have a lot of pros and are extremely beneficial if you have a lot of microservices to manage and an agile environment.
My preference is to start with VMs first and then slowly move to the container world. :)
I found out that Task Queues are primarily used in the App Engine standard environment. I am migrating our existing services from App Engine to Kubernetes. What would be a good alternative to the task queue? Push queues are what we currently use.
I read the documentation online and also went through this link: When to use PubSub vs Task Queues
But there is no clear answer as to whether Pub/Sub is a good alternative on Kubernetes.
Edit:
My current use case is that a service performs similar tasks for a set of IDs, and some tasks take a while to complete, so the queue would take such a task and process it while the service performs other things in parallel. Pub/Sub seems mainly intended for cases where we have a publisher and a subscriber, whereas here the service itself has tasks that it needs to keep processing in parallel!
I would think Cloud Pub/Sub is a great tool for message queues. It's orthogonal to how you deploy/run your services, whether with Kubernetes or something else.
There's a lot of relevant documentation for using pubsub with Kubernetes on GCP, like this page.
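For the queue-style use case described in the question, here is a minimal sketch with the `google-cloud-pubsub` Python client; the project, topic, and subscription names are placeholders and are assumed to already exist:

```python
from concurrent.futures import TimeoutError

from google.cloud import pubsub_v1

project_id = "my-project"            # placeholder GCP project
topic_id = "work-items"              # placeholder topic
subscription_id = "work-items-sub"   # placeholder subscription

# Publisher side: the service enqueues a task and keeps doing other work.
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(project_id, topic_id)
future = publisher.publish(topic_path, b"payload", entity_id="1234")
print("published message", future.result())

# Worker side (e.g. a separate Deployment in the cluster): pull and process tasks.
def handle(message: pubsub_v1.subscriber.message.Message) -> None:
    print("processing", message.data, dict(message.attributes))
    message.ack()  # acknowledge so the task is not redelivered

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(project_id, subscription_id)
streaming_pull = subscriber.subscribe(subscription_path, callback=handle)
try:
    streaming_pull.result(timeout=30)  # handle tasks in the background for a while
except TimeoutError:
    streaming_pull.cancel()
```

The worker side would usually run as its own Deployment, so task processing can scale independently of the service that publishes the work.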
Chris Richardson mentioned in his article "3rd-party-registration":
"The 3rd party registrar might only have superficial knowledge of the state of the service instance, e.g. RUNNING or NOT RUNNING and so might not know whether it can handle requests."
But what does this really mean? What information does a microservice send to the registrar when it starts? Why is the registrar not able to know information about the service and its location?
"The 3rd party registrar might only have superficial knowledge of the state of the service instance, e.g. RUNNING or NOT RUNNING and so might not know whether it can handle requests."
What information does a micro service send to the registrar when it starts? Why the registrar is not able to know information about the service and its location ?
The service will typically not contact the registry by itself. The pattern that has emerged is rather that an orchestration system starts up the service, makes sure the service is registered, and checks its status. This is helpful so you don't have to worry about these things when you design your service - the service should have a pure business focus and not have any knowledge of service discovery mechanisms. And the registry will of course need to know about the service and its location(s); because it is part of the orchestration system, it provides this information to the rest of the service cluster.
Then, about the quote: it refers to the fact that the registrar is a separate entity, so there is a need for communication between the registry and the service. The scope of that communication is usually confined to service readiness and availability (e.g. through a health probe). However, it is not uncommon for systems with a service registry to allow custom health probes for your own service types. Since those are in your control, you can define the exact communication and which APIs and return values make your service healthy or not.
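As an illustration of such a custom probe, here is a minimal sketch of a health endpoint the registrar or orchestrator could poll; Flask, the /health path, and the dependency check are assumptions, not something prescribed by the article:

```python
from flask import Flask, jsonify

app = Flask(__name__)


def database_reachable() -> bool:
    # Hypothetical dependency check; replace with a real connectivity test.
    return True


@app.route("/health")
def health():
    # Anything other than 200 tells the registrar/orchestrator that the
    # instance is RUNNING but not able to handle requests.
    if database_reachable():
        return jsonify(status="UP"), 200
    return jsonify(status="DOWN"), 503


if __name__ == "__main__":
    app.run(port=8080)
```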
Why is this very basic information about the service status sufficient?
The status information is what is required to divert traffic to healthy services when a service fails and / or automatically replace unhealthy service containers. These are the typical use cases and thus supported out of the box by a typical registration or orchestration system.
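In Kubernetes, for example, this status check is what a readiness probe provides; here is a minimal sketch with the official Python client, where the image name, port, and probe path are assumptions:

```python
from kubernetes import client

# While the probe fails, the Pod is removed from the Service's endpoints,
# which diverts traffic to the remaining healthy instances.
readiness = client.V1Probe(
    http_get=client.V1HTTPGetAction(path="/health", port=8080),
    period_seconds=10,
    failure_threshold=3,
)

container = client.V1Container(
    name="app",
    image="myregistry/app:latest",  # placeholder image
    ports=[client.V1ContainerPort(container_port=8080)],
    readiness_probe=readiness,
)
```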
I'm new to using Consul as a service discovery solution for SOA. Please clarify whether I can enhance the monitoring web UI that Consul has built in to support my needs. For example, I wish to monitor microservice health (CPU usage, latency, and disk space). TIA
Through the built-in Consul UI there is no feature for getting Docker metrics like CPU and memory.
Instead, you can use tools like
https://github.com/google/cadvisor
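For context on what the built-in UI does surface (pass/fail health checks rather than resource metrics), here is a hedged sketch using the `python-consul` client; the service name, address, and /health endpoint are assumptions:

```python
import consul

# Connect to the local Consul agent (default localhost:8500).
c = consul.Consul()

# Register a service with an HTTP health check; the Consul UI will show this
# check as passing/failing, but not CPU, memory, or disk usage.
c.agent.service.register(
    name="orders-service",                # hypothetical service name
    service_id="orders-service-1",
    address="10.0.0.12",
    port=8080,
    check=consul.Check.http("http://10.0.0.12:8080/health", interval="10s"),
)
```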
I'm using WSO2 ELB 2.1.0 and WSO2 ESB 4.8.1 to make a cluster. All the configuration was done following the official documentation guide and it works fine.
Now I need to write a client able to dynamically list the ESB cluster members in order to monitor the situation inside my cluster.
I tried to write a Hazelcast client but I am not able to connect to the cluster at all.
What road should I follow? Are there any APIs or services I can use?
Thanks
Maybe this will be useful to someone else in the future:
The WSO2 Elastic Load Balancer has been discontinued as of July 1st, 2015.
Thanks
Shadsha