Using Spring Security in the cloud with autoscaling of a web application - spring

A web application is built with the Spring 4.0 Framework, and Spring Security 3.2 is used to authenticate users with the remember-me feature.
Remember-me and security are implemented with JDBC support (the required data is stored in a database).
A lot of Spring beans are used, all created as Spring singletons.
This web application runs in a Tomcat 7 servlet container installed on a "classic" host server.
In production the web application will run in Tomcat 7 managed by a cloud provider - either in AWS Elastic Beanstalk or directly on an EC2 instance with Tomcat installed and autoscaling enabled.
That means that at first only ONE EC2 instance is running, hosting Tomcat server 1 (TOMCAT1). This server has the Spring beans initialized and held in JVM 1.
At "peak time" a second EC2 instance is started, and with it a second Tomcat server (TOMCAT2).
Is it possible that USER1, who was authenticated on TOMCAT1, will have problems with authentication and other business operations in the web application if the load balancer routes USER1's requests to TOMCAT2 after it starts?
I don't know whether Spring 4.0 or Spring Security is stateless by default.

You can consider using Elastic Load Balancer sticky sessions for your application.
Read about using sticky sessions here:
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/US_StickySessions.html
Read about using Sticky Sessions with Elastic Beanstalk here:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.managing.elb.html
Read about the stickiness policy here:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html

Related

How to enable a cache across Spring Boot app instances of the same service?

We would like to enable caching in a Spring Boot app.
I followed this link and enabled it:
https://howtodoinjava.com/spring-boot2/spring-boot-cache-example/
When I run the Spring Boot app as multiple instances (on different ports such as 8081, 8082, ...),
each instance creates its own cache whenever its turn comes.
How do I enable a cache shared across Spring Boot app instances?
You can't. Each time you run the application, a new server starts that maintains its own in-memory cache.
If you want a common cache, install a separate server, such as Redis, and configure the Spring app to use it. It can then be accessed by all instances.
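A minimal sketch of the Redis-backed approach (assuming spring-boot-starter-cache and spring-boot-starter-data-redis are on the classpath, with spring.cache.type=redis and a hypothetical Redis host configured via spring.redis.host/spring.redis.port) could look like this:

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.cache.annotation.Cacheable;
    import org.springframework.cache.annotation.EnableCaching;
    import org.springframework.stereotype.Service;

    @SpringBootApplication
    @EnableCaching // enables Spring's cache abstraction; Boot backs it with Redis when spring.cache.type=redis
    public class CacheDemoApplication {
        public static void main(String[] args) {
            SpringApplication.run(CacheDemoApplication.class, args);
        }
    }

    @Service
    class ProductService {
        // The cached value lives in Redis, so every instance (8081, 8082, ...) reads the same entry.
        @Cacheable("products")
        public String findProduct(String id) {
            return expensiveLookup(id);
        }

        private String expensiveLookup(String id) {
            // stand-in for a slow database or remote call
            return "product-" + id;
        }
    }

With that in place, the first instance to call findProduct("42") populates the entry, and the other instances get a cache hit instead of recomputing it.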

Configuration or link required to connect to a Pivotal Cloud Cache cluster from Spring Boot microservices

I am setting up Spring Boot microservices with a bi-directional Pivotal Cloud Cache cluster.
I have set up the bi-directional cluster in Pivotal Cloud, and I have a list of locators with ports.
I have already found some online docs:
https://github.com/pivotal-cf/PCC-Sample-App-PizzaStore
But I couldn't understand from which configuration the Spring Boot app knows how to connect.
I am looking for a tutorial or reference for linking a Spring Boot app with PCC (GemFire).
The way you configure an app running in PCF (Pivotal Cloud Foundry) to talk to a PCC (Pivotal Cloud Cache) service instance is by binding the app to that service instance. You can bind it either by running the cf bind-service command or by adding the service name to the app's manifest.yml, something like the below:
path: build/libs/cloudcache-pizza-store-1.0.0-SNAPSHOT.jar
services:
- dev-service-instance
I hope you are using Spring Boot for Apache Geode & Pivotal GemFire (SBDG) in your app; if not, I recommend it, because it makes connecting to a PCC service instance extremely easy. SBDG has the logic to extract the credentials and hostname:port pairs needed to connect to a service instance.
As an app developer you just need to:
Create the service instance.
Bind your app to the service instance.
The boilerplate code for configuring credentials, hostnames and IPs is handled by SBDG.
When you deploy an application in Cloud Foundry (or Pivotal Cloud), you need to bind it to one or more services. Service details are then automatically exposed to the app via the VCAP_SERVICES environment variable. In the case of PCC this will include the name and port of the locator. By adding the spring-geode-starter (or spring-gemfire-starter) jar to the application, it will automatically process the VCAP_SERVICES value and extract the necessary endpoint information in order to connect to the cluster.
Furthermore, if security is enabled on your PCC instance, you will also need to have created a service key. As with the locator details, the necessary credentials will be exposed via VCAP_SERVICES and the starter jar will automatically process and configure them.
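A rough client-side sketch (assuming the spring-geode-starter dependency and an app already bound to the PCC service instance; the runner below is only a hypothetical smoke test) could be as small as:

    import org.apache.geode.cache.client.ClientCache;
    import org.springframework.boot.ApplicationRunner;
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.context.annotation.Bean;

    // With spring-geode-starter on the classpath and the app bound to the PCC service
    // instance, SBDG auto-configures this ClientCache from VCAP_SERVICES (locator
    // host/port plus credentials); no manual pool or locator configuration is needed.
    @SpringBootApplication
    public class PccClientApplication {

        public static void main(String[] args) {
            SpringApplication.run(PccClientApplication.class, args);
        }

        @Bean
        ApplicationRunner verifyConnection(ClientCache clientCache) {
            // Simple check that the client connected: print the locators of the default pool.
            return args -> System.out.println(
                    "Connected via locators: " + clientCache.getDefaultPool().getLocators());
        }
    }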

Web server to multiple ejb server call using tomee

I am using a TomEE server and I want to deploy my EJB application to multiple instances and access it from a web application.
I want to add a load balancer between the web application and the EJB application.
How can I achieve this?
I already have a load balancer for multiple web application instances using mod_jk, but I need this configuration somewhere in the InitialContext properties file.
Attaching a picture of how I want to build the app architecture (architecture pic).
I have been struggling with this for quite some time. Any help will be appreciated.
You can use TomEE's multipoint discovery feature by using a failover PROVIDER_URL in the InitialContext properties and listing multiple URLs.
A multipoint.properties file is added under conf.
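A hedged sketch of such an InitialContext setup (host names, port and the JNDI/bean names below are made up; the failover: prefix makes the OpenEJB/TomEE remote client try the listed ejbd endpoints in turn):

    import java.util.Properties;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    public class EjbClient {

        // Remote business interface shared with the EJB servers (illustrative only).
        interface OrderServiceRemote {
            String ping();
        }

        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(Context.INITIAL_CONTEXT_FACTORY,
                      "org.apache.openejb.client.RemoteInitialContextFactory");
            // Multiple ejbd endpoints; the client fails over to the next one if a host is down.
            props.put(Context.PROVIDER_URL,
                      "failover:ejbd://ejb-host-1:4201,ejbd://ejb-host-2:4201");

            Context ctx = new InitialContext(props);

            // The JNDI name depends on how the bean is deployed; this one is illustrative.
            OrderServiceRemote orderService = (OrderServiceRemote) ctx.lookup("OrderServiceRemote");
            System.out.println(orderService.ping());
        }
    }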

How will a monolith Spring 3 application communicate with a microservice?

I have a monolithic Spring web application developed with Spring 3.1 and Spring Security 3.1 on Java 7, deployed on Tomcat 7.
Now I have a new requirement: I have to create a microservice for a new module using Spring Boot with Java 8. This microservice will be deployed separately on a different EC2 instance.
I am looking for suggestions/ideas for accessing the new microservice from my existing Spring web application.
How do I perform inter-process communication between these two Spring applications?
Can someone provide any help/pointers?
You can make use of the service discovery pattern, which comes in two main kinds:
Client-side discovery - clients are responsible for figuring out the available service instances. Example: Netflix OSS.
Server-side discovery - service instances are registered in a service registry on the server side, and a router forwards requests to them. Example: AWS ELB.
You can read a lot about these on the internet; just remember the keywords.
Hope this helps!
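For the server-side discovery variant, a minimal sketch from the monolith's side (Spring 3.x already ships RestTemplate, and the host name new-module.internal.example.com below is a made-up DNS/ELB name fronting the microservice's EC2 instances) could be:

    import org.springframework.web.client.RestTemplate;

    public class NewModuleClient {

        private final RestTemplate restTemplate = new RestTemplate();

        // The monolith only knows this stable load-balancer name; AWS ELB picks a healthy instance.
        private final String baseUrl = "http://new-module.internal.example.com";

        public String fetchReport(String reportId) {
            // Plain HTTP call works from Spring 3.1 without upgrading the monolith.
            return restTemplate.getForObject(baseUrl + "/reports/{id}", String.class, reportId);
        }
    }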

Discovering Hazelcast instance in a Spring Boot eco-system

Background:
We have a set of about 15 Spring Boot applications as microservices. They all run as Docker containers, and run as clusters of one or more instances. We also use Spring Cloud Netflix components such as Eureka and discover the running application instances from the client using Feign/Ribbon.
Question:
As part of the POC exercises, we tested both Redis and Hazelcast for caching and Spring Boot configuration storage. Everything works using Spring Boot, Spring Cloud and the Redis/Hazelcast Java client libraries. However, when we deploy Hazelcast as a multi-node peer-to-peer cluster, Hazelcast seems to require a "known" IP address/hostname and an accessible port in the Java client's configuration (with or without Spring). Typically, when Hazelcast is deployed as a multi-instance cluster on ephemeral VM instances (for example, EC2), the IP address and port information are not known in advance. So we thought of two possible solutions:
Find a way to run Hazelcast as a Spring Boot application, and register it with Eureka as a Discovery Client. That way other Spring Boot applications can use Eureka to discover an instance of Hazelcast dynamically
Find a way in Hazelcast so that it can publish its IP address and port information dynamically to a central key/value store
If anyone has played around with Hazelcast to be able to do either/both of the possible solutions, it would be great if you can share more information on that. If there is a third approach that'd work better, I will be eager to know that as well.
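For the first option, one hedged sketch (assuming the Spring Cloud Netflix Eureka client and the Hazelcast 4.x API; the application and cluster names are made up) is to start the Hazelcast member inside a Spring Boot app that registers itself with Eureka like any other service:

    import com.hazelcast.config.Config;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
    import org.springframework.context.annotation.Bean;

    @SpringBootApplication
    @EnableDiscoveryClient // registers this app (e.g. spring.application.name=hazelcast-node) with Eureka
    public class HazelcastNodeApplication {

        public static void main(String[] args) {
            SpringApplication.run(HazelcastNodeApplication.class, args);
        }

        @Bean(destroyMethod = "shutdown")
        HazelcastInstance hazelcastInstance() {
            Config config = new Config();
            config.setClusterName("poc-cluster"); // Hazelcast 4.x; on 3.x use the group config instead
            return Hazelcast.newHazelcastInstance(config);
        }
    }

Other services could then resolve the member's host through the DiscoveryClient (or Feign/Ribbon, as you already do) and build the Hazelcast client address list from that, instead of hard-coding IPs.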
