Environment for high availability and scalability of the application - websphere-liberty

A system administrator needs to set up a new Liberty profile environment to support an application. What should the administrator do to enable this environment for high availability and scalability of the application?
A. Define multiple server members in one collective controller.
B. Define multiple servers in a cluster in one collective controller.
C. Define multiple collective controllers within a Liberty collective.
D. Define multiple server members in multiple collective controllers.

“A Liberty server cluster is comprised of two or more Liberty profiles configured into a server cluster within a Liberty collective.”
The correct answer should be "Define multiple servers in a cluster within a Liberty collective"… but that answer doesn't exist, so it's between B and C for me…
A collective member can be configured with multiple collective controller endpoints. A collective member only communicates with one collective controller at a time; however, a configuration with more than one collective controller endpoint provides failover and workload balancing.
It should be B.

Related

Running multiple instances of the same Spring Boot application

I have a Spring Boot application which I want to deploy on the OVH public cloud.
I need to achieve the goal of deploying multiple instances of the same application, and each instance has to have its own resources (such as a MySQL database).
Each instance has to be accessed with a special URL. For example:
The first instance is accessible from http://domainname/instance1/index.html
The second instance is accessible from http://domainname/instance2/index.html
I'm really new to everything which concerns cloud computing and deployments.
From what I read on the internet, my idea is to:
Use Docker, where each instance runs inside its own container (to keep the resources separated for each instance).
Use Kubernetes to achieve the goal of having each instance accessible from a specific URL.
Am I wrong? Any online courses / resources / videos which can help would be awesome.
Thanks in advance.
Basically, Docker is a platform to develop, deploy, and run applications inside containers; containers provide the run-time environment for images. Kubernetes plays the role of an orchestrator: it provides a way to build communication channels between containers in the cluster and uses Docker by default as the container runtime.
There are some essential concepts in Kubernetes that describe the cluster's core components and application workload, and thus define the desired state of the cluster.
Kubernetes objects are the abstraction the Kubernetes API uses to represent cluster management operations and the run-time environment of containerized applications, together with their associated resources.
I would focus on the Kubernetes resources that are most crucial in the application deployment lifecycle.
Deployment is the main mechanism that defines how Pods should be created within a cluster and provides the configuration for the application's run-time workflow.
Service describes how a particular Pod (or set of Pods) communicates with other resources within the cluster, providing the endpoint IP address and port where your application will respond.
Ingress exposes a Kubernetes Service outside the cluster, with additional benefits like load balancing, SSL/TLS certificate termination, etc.
You can get more relevant information about the Kubernetes implementation in OVH in the corresponding guide chapter.
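To make the path-per-instance part concrete, below is a rough sketch of the kind of Ingress this ends up describing, built with the official Kubernetes Java client (io.kubernetes:client-java) purely for illustration; the Service names instance1-svc / instance2-svc and port 8080 are hypothetical, and a plain YAML manifest would express exactly the same thing.

```java
// Rough sketch only, assuming the official Kubernetes Java client
// (io.kubernetes:client-java); Service names and the port are hypothetical.
import io.kubernetes.client.openapi.models.V1HTTPIngressPath;
import io.kubernetes.client.openapi.models.V1HTTPIngressRuleValue;
import io.kubernetes.client.openapi.models.V1Ingress;
import io.kubernetes.client.openapi.models.V1IngressBackend;
import io.kubernetes.client.openapi.models.V1IngressRule;
import io.kubernetes.client.openapi.models.V1IngressServiceBackend;
import io.kubernetes.client.openapi.models.V1IngressSpec;
import io.kubernetes.client.openapi.models.V1ObjectMeta;
import io.kubernetes.client.openapi.models.V1ServiceBackendPort;
import io.kubernetes.client.util.Yaml;

public class InstanceIngressSketch {

    public static void main(String[] args) {
        // One Ingress, two path rules: /instance1 and /instance2 are routed
        // to two separate Services, each backed by its own Deployment.
        V1Ingress ingress = new V1Ingress()
                .apiVersion("networking.k8s.io/v1")
                .kind("Ingress")
                .metadata(new V1ObjectMeta().name("instances-ingress"))
                .spec(new V1IngressSpec().addRulesItem(new V1IngressRule()
                        .http(new V1HTTPIngressRuleValue()
                                .addPathsItem(path("/instance1", "instance1-svc"))
                                .addPathsItem(path("/instance2", "instance2-svc")))));

        // Print the manifest; applying it to the cluster (kubectl apply or the
        // NetworkingV1Api) is left out of this sketch.
        System.out.println(Yaml.dump(ingress));
    }

    // Routes one URL prefix to one backing Service on an assumed port 8080.
    private static V1HTTPIngressPath path(String prefix, String serviceName) {
        return new V1HTTPIngressPath()
                .path(prefix)
                .pathType("Prefix")
                .backend(new V1IngressBackend().service(
                        new V1IngressServiceBackend()
                                .name(serviceName)
                                .port(new V1ServiceBackendPort().number(8080))));
    }
}
```

Each path prefix simply routes to the Service in front of one Deployment, so adding a third instance is just one more path entry plus its own Deployment, Service, and database.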
Ideally, if it's a single application it should connect to one backend database, not 3 different databases.
If your use case is very specific and you really want to connect 3 instances of an application to 3 different databases, then consider each deployed instance as an independent application with 3 different deployments.
As for Docker and Kubernetes, I don't feel you need these initially; rather, deploy your application directly to the cloud instances. To achieve high availability of the application, deploy the instances as part of an autoscaling group and map an ELB to each autoscaling group. Finally, map the ELB CNAME in your DNS record and start using your application.
Docker and K8s come with their own learning curve and add overhead if you are new to this area, though they have a lot of pros and are extremely beneficial if you have a lot of microservices to manage and an agile environment.
My preference is to start with VMs first and then slowly move to the container world. :)
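If you do start with plain VMs, the /instance1 vs. /instance2 part of the URL can also come from the application itself rather than from a reverse proxy. A minimal, hypothetical Spring Boot sketch, where the context path and datasource URL are placeholder values that each deployed instance would override (e.g. via environment variables):

```java
// Minimal sketch, assuming Spring Boot; the context path and datasource URL
// shown are hypothetical defaults that each deployed instance would override,
// so every instance answers under its own URL prefix and uses its own MySQL
// database.
import java.util.Map;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class InstanceApp {

    public static void main(String[] args) {
        SpringApplication app = new SpringApplication(InstanceApp.class);
        // Defaults used only when the instance does not override them
        // (environment variables or per-instance property files win).
        app.setDefaultProperties(Map.<String, Object>of(
                "server.servlet.context-path", "/instance1",
                "spring.datasource.url", "jdbc:mysql://db-instance1:3306/appdb"));
        app.run(args);
    }
}
```

Each instance then serves its content under its own prefix and talks to its own database; how requests for a given prefix reach the right instance (ELB rules, a reverse proxy, or an Ingress as in the other answer) is a separate routing concern.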

Starting with MassTransit

I'm already using RabbitMQ as a queue 'buffer' and as a messaging bus, but I'm considering moving to MassTransit to make it easier to use.
We run in a multi-tenant environment, and to isolate our tenants we have created a dedicated vhost for each tenant plus a "common" vhost for non-tenant related messages.
I would like to know if there's a best practice for multi-tenancy with MassTransit and if it is possible to reproduce the same scheme (1 vhost per tenant) with MassTransit.
Can I create multiple instances of IBusControl (one per tenant, linked to a dedicated IRabbitMqHost) in the same process?
Yes, MassTransit allows the creation of as many bus instances as you need, and you could create one per vhost without any issues. Just make sure your RabbitMQ server is configured to allow enough connections/sessions to support the total number of tenants, queues, and exchanges.
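MassTransit itself is a .NET library, so just to illustrate the broker-level pattern the answer relies on (one isolated connection per tenant vhost), here is a sketch using the plain RabbitMQ Java client (com.rabbitmq:amqp-client); the host, vhost names, and queue name are hypothetical.

```java
// Illustration only: one broker connection per tenant vhost, which is the
// isolation MassTransit's per-vhost bus instances build on. Host, vhost
// names, and the queue name are hypothetical.
import java.nio.charset.StandardCharsets;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class PerTenantVhostSketch {

    public static void main(String[] args) throws Exception {
        // One isolated connection per tenant vhost, plus the shared "common" vhost.
        for (String vhost : new String[] {"tenant-a", "tenant-b", "common"}) {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("rabbitmq-host");
            factory.setVirtualHost(vhost);

            try (Connection connection = factory.newConnection();
                 Channel channel = connection.createChannel()) {
                // Queues declared here are only visible inside this vhost.
                channel.queueDeclare("orders", true, false, false, null);
                channel.basicPublish("", "orders", null,
                        ("hello from " + vhost).getBytes(StandardCharsets.UTF_8));
            }
        }
    }
}
```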

Shared objects in a clustered application with Liberty Profile cluster on BlueMix

I am developing an application that needs to be clustered in Liberty Profile on Bluemix. I need to have a shared List of objects accessible to all nodes of the cluster. The app will perform update, add and remove operations on them, as one node cannot cope with the application's heavy load. How can I do this with Liberty Profile? Is there a best practice or recommended approach before looking for 3rd-party solutions? Thanks
I would suggest looking at the services in Bluemix, for example Data Cache or Redis. An external (to your app) service would be the cloud best practice for sharing data between multiple instances of the same application or multiple applications that need to communicate.
A traditional Liberty cluster doesn't make sense in Bluemix because the Cloud Foundry platform upon which it is based is already providing ways to achieve high availability and scale.
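As a concrete illustration of that advice, here is a minimal sketch that keeps the shared list in Redis rather than in any single Liberty node's memory. It assumes the Jedis client (redis.clients:jedis); the host, port, and key name are hypothetical, and on Bluemix the real connection details would come from the bound service's credentials (VCAP_SERVICES).

```java
// Minimal sketch, assuming the Jedis client (redis.clients:jedis); the host,
// port, and key name are hypothetical and would come from the bound Bluemix
// service credentials rather than being hard-coded.
import java.util.List;
import redis.clients.jedis.Jedis;

public class SharedListSketch {

    private static final String KEY = "shared:objects";

    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("redis-host", 6379)) {
            // Add: every cluster member pushes to the same Redis list.
            jedis.rpush(KEY, "order-123");

            // Read: any member sees the updates made by the others.
            List<String> all = jedis.lrange(KEY, 0, -1);
            System.out.println("Current shared objects: " + all);

            // Remove: delete all occurrences of one element.
            jedis.lrem(KEY, 0, "order-123");
        }
    }
}
```

For real objects you would serialize to JSON (or use a Redis hash), but the point stays the same: the shared state lives in the external service, so every node in the cluster sees the same list.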

Service Fabric multi-tenant

We are planning to use Azure Service Fabric for a data-oriented multi-tenant application. Typically 100+ customers each with 5 - 100 users.
Looking at the documentation, I concluded that the best approach is to use an Application instance for each customer, rather than trying to use Profiles to achieve multi-tenancy.
Is this the best way to go ?
An application instance for each customer is a good way to handle multi-tenant situations on a single cluster, yes. There are Service Fabric applications that do this today (Azure DB is a notable one).
Here are some things you get with this approach:
Each application instance gets its own process, which means you have process-level isolation per tenant.
Each application instance is composed of one or more services, which means you can use a "microservices" architectural style for the application.
Each application instance can be created with unique parameters, so you can have various setups for each tenant. For example, you can do things like offer higher availability to certain tenants by using higher replica set size settings for the services in their application instance, or you can offer higher data capacity by using a higher partition count setting for the services in their application instance.
These are generally good things for data-oriented multi-tenant situations, but whether or not it's the best way to go of course depends on your specific requirements.

Windows Azure Visual Studio Solution

My application contains 25 C# projects, these projects are divided into 5 solutions.
Now I want to migrate these projects to run under Windows Azure, and I realized that I should create one solution that contains all my web roles and worker roles.
Is this the correct way to do so, or can I still divide my projects into several solutions?
The Projects are as shown below:
One Web application.
5 Windows Services.
The others are all class libraries.
Great answers by others. Let me add a bit more about one vs. many hosted services: If the Web and Worker roles need to interact directly (e.g. via TCP connection from Web role instance to a specific worker role instance), then those roles really should be placed in the same hosted service. External to the deployment, your hosted service listeners (web, wcf, etc.) are accessed by IP+Port; you cannot access a specific instance unless you also enable Azure Connect (VPN).
If your Web and Worker roles interact via Azure Queues or Service Bus, then you have the option of deploying them to separate hosted services and still have the ability to communicate between them.
The most important question is: How many of these 25 projects are actual WebSites/Web Applications or Windows Services, and how many of them are just Class Libraries.
For the Class Libraries, you do not have to convert anything.
Now for the Cloud project(s). You have to decide how many hosted services you will create. You can read my blog post to get familiar with terms like "Hosted Service", "Role", "Role Instance", if you need to.
Once you have decided on your cloud structure (the number of hosted services and the roles per service), you can create a new solution for each hosted service.
You can also decide to host multiple web sites in a single WebRole, which is totally supported and possible, since WebRoles have run in a full IIS environment since SDK 1.3. You can read more about hosting multiple web sites in a single web role here and here, and even use the Windows Azure Accelerator for Web Roles.
If you have multiple Windows Services or background worker processes, you can combine them into a single Worker Role, or define a Worker Role for each separate worker process should you desire separate elasticity for each process, or should a worker require a lot of computing power and memory.
UPDATE with regard to the question update:
So, the Web Application is clear - it goes into one Web Role. Now for the Windows Services. There are two main questions that you have to answer in order to decide whether to put them into one or more Worker Roles:
Does any of your Windows Services require excessive resources (i.e. a lot of computing power or a lot of RAM)?
Does any of your Windows Services require independent scaling?
If the answer to either question is "yes", then put that Windows Service in its own Worker Role. Put all the Windows Services for which the answer to both questions is "no" into a single Worker Role. That means that you will scale all of them or none of them (by manipulating the number of instances).
As for the Cloud Service (or Hosted Service), it is up to you to decide whether to use a single cloud service to deploy all the roles (Web and Workers) or one hosted service to deploy the Web Role and another to deploy the Worker Roles. There is absolutely no difference from a billing perspective. You will still have your Web Role and Worker Role(s), and you will be charged based on instance count and data traffic. And you can independently scale any role (change the number of instances for a particular role) regardless of its deployment (within the same hosted service or another hosted service).
In the end, I suggest that you have a single solution per Hosted Service (Cloud Project). So if you decide to put the Web Role and Worker Roles into a single Hosted Service, then you will have a single solution. If you have two Hosted Services (Cloud Projects), you will have two solutions.
Hope this helps.
You are correct! All the projects go under one hosted service if you create only one cloud project for all your web role and worker role projects.
You can still divide your projects into several solutions, but then you have to create that many cloud projects and hosted services on the Azure platform.
You can do both.
You can keep your 5 separate solutions as they are. Then, create a new solution that contains all 25 projects.
Which solution you choose to contain your Cloud (ccproj) project(s) will depend on how you want to distribute your application.
Each CCPROJ corresponds to 1 hosted service. So you could put all of your webs and workers into a single hosted service. Or you could have each web role as a different hosted service, and all of your worker roles together on another hosted service. Or you could do a combination of these. A definitive answer would require more information about your application, but in VS, a project can belong to more than 1 solution.
