I have a licence for WebSphere Application Server (Base package); can that be used to create a cluster environment?

I have the WebSphere Application Server base version and I need to deploy an application in a horizontal cluster. Can that be achieved without the Network Deployment version?

You cannot create and manage a cluster on the base version.
What you can do is so-called simple load balancing: you run two separate, standalone servers, manually deploy the same application to each of them, and configure IBM HTTP Server to load balance between the two.
That gives you failover and load balancing, but without centralized management.
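As a rough sketch of what the merged IBM HTTP Server plug-in configuration (plugin-cfg.xml) might contain for that setup; host names, ports and the URI pattern are placeholders for your own two standalone servers:

<ServerCluster Name="MyAppServers" LoadBalance="Round Robin">
  <Server Name="server1">
    <Transport Hostname="app-host-1" Port="9080" Protocol="http"/>
  </Server>
  <Server Name="server2">
    <Transport Hostname="app-host-2" Port="9080" Protocol="http"/>
  </Server>
</ServerCluster>
<UriGroup Name="MyAppServers_URIs">
  <Uri Name="/myapp/*"/>
</UriGroup>
<Route ServerCluster="MyAppServers" UriGroup="MyAppServers_URIs" VirtualHostGroup="default_host"/>

In practice you would generate plugin-cfg.xml on each standalone server and merge the two, rather than writing it by hand.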

Related

Application Scaling with Hazelcast IMDG Open Source

I have a Java application running on a single-instance EC2 server. I want to implement autoscaling for the server, but my Java application will not work in active-active mode.
So I have started looking at Hazelcast IMDG open source for application scaling.
I am new to Hazelcast, so can anyone give me an idea of how I can implement Hazelcast open source for scaling my application, and what the steps would be?
You can deploy a Hazelcast cluster on EC2 with Auto Scaling turned on. See here for details: https://github.com/hazelcast/hazelcast-aws/blob/master/README.md
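As a minimal sketch (the exact setters vary between Hazelcast and hazelcast-aws versions, and the region and tag values here are assumptions that must match the tags on your Auto Scaling group instances), enabling AWS member discovery programmatically looks roughly like this:

import com.hazelcast.config.Config;
import com.hazelcast.config.JoinConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class HazelcastAwsMember {
    public static void main(String[] args) {
        Config config = new Config();
        JoinConfig join = config.getNetworkConfig().getJoin();
        join.getMulticastConfig().setEnabled(false);        // multicast is not available on EC2
        join.getAwsConfig()
            .setEnabled(true)
            .setProperty("region", "us-east-1")             // assumption: your AWS region
            .setProperty("tag-key", "hazelcast-cluster")    // assumption: tag on the ASG instances
            .setProperty("tag-value", "my-app");
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
        System.out.println("Cluster members: " + hz.getCluster().getMembers());
    }
}

New instances started by the Auto Scaling group then discover the existing members through the EC2 API and join the cluster automatically.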

How to make sure there will be a fixed DB server across multiple deployments in Cloud Foundry?

I am a newbie with CF microservices and I am trying to deploy a service multiple times. As far as I understand, each time I deploy into a space the application gets a different database server and schema. Is there a way to tell Cloud Foundry to use the same fixed DB server every time across multiple deployments in one environment?
The keyword for your case is 'service instance'.
You can create a service instance of a database server within the environment, specific to your application, and bind it via the application manifest.
e.g.
cf create-service rabbitmq small-plan myapplication-rabbitmq-instance
As long as you have a binding to myapplication-rabbitmq-instance in your application manifest, it will be preserved/stay the same between application deployments within this space.
e.g. in your application manifest:
---
...
services:
- myapplication-rabbitmq-instance
More on https://docs.cloudfoundry.org/devguide/services/
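The same pattern applies to a database service; the service and plan names below are only placeholders, so use whatever cf marketplace lists in your environment, and bind the instance either via the manifest as above or explicitly:

cf marketplace
cf create-service elephantsql turtle myapplication-db-instance
cf bind-service myapplication myapplication-db-instance
cf restage myapplication

Because the service instance lives in the space rather than in the app, every push of the application binds to the same database.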

How to run Spring Cloud Config server in Fault Tolerance mode?

In my project we have a requirement to run two instances of Spring Cloud Config Server, so that if one instance goes down, the other will take over the config server responsibilities.
Currently, you would need to put the config server behind a load balancer. It is stateless, so that wouldn't hurt. There is an open issue about configuring multiple config server URLs in the client, so it could do failover there.
If you are running multiple instances of the config server, you can have them all register themselves in Eureka, and have all the other microservices look up the config server by its application name via Eureka. This way, Zuul (and Ribbon) will take care of the load balancing.
Edit:
I guess spencergibb is right. It's best to use a load balancer, e.g. an ELB, if you're going to deploy on AWS.
Consider multiple spring-cloud-config-uris for high availability
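For reference, a sketch of both approaches on the client side in bootstrap.properties; the property names assume a reasonably recent Spring Cloud release, and the host names are placeholders:

# Option 1: give the client a list of config servers to fail over across
spring.cloud.config.uri=http://config-1:8888,http://config-2:8888

# Option 2: discovery-first bootstrap, resolving the config server via Eureka
spring.cloud.config.discovery.enabled=true
spring.cloud.config.discovery.service-id=configserver
eureka.client.serviceUrl.defaultZone=http://eureka-host:8761/eureka/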

RESTful Microservice failover & load balancing

At the moment we have some monolithic web applications and are trying to move the projects to a microservices infrastructure.
For the monolithic applications we use HAProxy and session replication to get failover and load balancing.
Now we are building some RESTful microservices with Spring Boot, but it's not clear to me what the best way is to build the production environment.
Of course we can run all applications as Unix services and still have a reverse proxy for load balancing and failover. This solution seems very heavy to me and involves a lot of configuration and maintenance. Resource management and scaling servers up or down will always be a manual process.
What are the best options to set up a production environment with 2-3 servers and easy resource management?
Is there a solution that also supports continuous deployment?
I'd recommend looking into service discovery. Netflix describes this as:
A Service Discovery system provides a mechanism for:
Services to register their availability
Locating a single instance of a particular service
Notifying when the instances of a service change
Packages such as Netflix's Eureka could be of help. (EDIT - actually this looks like it might be AWS specific)
This should work well with continuous delivery as the services can make themselves unavailable, be updated and then register availability again.
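As a minimal sketch with Spring Cloud Netflix Eureka (the class name and service name here are placeholders, not taken from the question), each Spring Boot service only needs to register itself at startup:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

@SpringBootApplication
@EnableDiscoveryClient  // register with Eureka on startup, deregister on shutdown
public class OrderServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(OrderServiceApplication.class, args);
    }
}

With spring.application.name=order-service and eureka.client.serviceUrl.defaultZone pointing at your Eureka server in application.properties, other services (or an edge proxy such as Zuul with Ribbon) resolve instances by name rather than by a hardcoded host, which is what allows instances to be taken down, updated, and re-registered one at a time.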

IBM Worklight - Can Worklight be deployed to an existing WAS server?

Can a Worklight Server be deployed to a WebSphere application server which also runs other non-Worklight .ear applications? Or does Worklight need its own separate instance of WAS?
Just as you can deploy multiple Worklight (v6 and above) projects, i.e. multiple .war files, to the same WAS application server, there should be no issue deploying it to an application server running other services.
That said, possible issues to consider:
When deploying a Worklight project, you will want to enable "application security" (in the WAS admin console, Security > Global Security). If there are some other web applications for which application security is undesired, you need a different WAS server instance.
Setting up, enabling and migrating security
The list of users that can use the web applications is configured through LDAP or "federated repositories", or similar. If, for Worklight, you need to use a completely different set of user logins than for the other web applications, then you need to use multiple "security domains".
Configuring multiple security domains
The machine hosting the application server will probably need memory upgrades...
Deploying the Enterprise Archive (EAR) Using the WebSphere Admin Console
You will probably also need to make a clear separation where required:
IBM WebSphere Developer Technical Journal: Co-hosting multiple versions of J2EE applications
Worklight is itself an application running inside a web container, whether that be Tomcat, WAS Liberty, or full WAS. It's essentially a layer running underneath the container to handle requests for Worklight applications, fielding their context root requests. If you create the WAR file for your Worklight app and extract out the deployment descriptor you'll find all the necessary filters and listeners that most other apps would have.
Things like adapters and wlapps are "installed" to this underlying layer, and are merely extracted and stored as whatever was packaged with them, such as the JS and CSS you used to make your app. In fact, with a standard Liberty install you can typically find your adapters in plain sight at (for the WL5.0.6 instance I have handy; it's different for WL6):
/opt/IBM/Worklight/server/wlp/usr/servers/worklightServer/worklight.home/worklight/data/export/adapters
So, in addition to what Idan has said, I also present you with the following docs (assuming WL6):
Overview of the Worklight Server installation process
Given my own experience, you should be perfectly able to install other EAR and WAR files to your existing WAS instance, just make sure your context roots are unique, as always ;)
I also second the memory considerations.
