Hibernate session closed for no apparent reason - Spring

I've been struggling with this for days and I really don't know what's happening here.
I have a few processes that are triggered by a user and run on a separate thread; each does some operations and updates a DB entry about its progress, which the user can retrieve.
This had all been working fine until recently, when suddenly, sometimes, seemingly uncorrelated with anything, these processes fail on their first attempt to lazy-load an entity. They fail with one of a few different errors, all of which ultimately stem from the Hibernate session somehow being closed:
org.hibernate.SessionException: Session is closed. The read-only/modifiable setting is only accessible when the proxy is associated with an open session.
-
org.hibernate.SessionException: Session is closed!
-
org.hibernate.LazyInitializationException: could not initialize proxy - the owning Session was closed
-
org.hibernate.exception.GenericJDBCException: Could not read entity state from ResultSet : EntityKey[com.rdthree.plenty.domain.crops.CropType#1]
-
java.lang.NullPointerException
at com.mysql.jdbc.ResultSetImpl.checkColumnBounds(ResultSetImpl.java:766)
I'm using Spring's @Transactional to manage my transactions.
Here's my config:
@Bean
public javax.sql.DataSource dataSource() {
    // update TTL so that the datasource will pick up DB failover - new IP
    java.security.Security.setProperty("networkaddress.cache.ttl", "30");
    HikariDataSource ds = new HikariDataSource();
    String jdbcUrl = "jdbc:mysql://" + plentyConfig.getHostname() + ":" + plentyConfig.getDbport() + "/"
            + plentyConfig.getDbname() + "?useSSL=false";
    ds.setJdbcUrl(jdbcUrl);
    ds.setUsername(plentyConfig.getUsername());
    ds.setPassword(plentyConfig.getPassword());
    ds.setConnectionTimeout(120000);
    ds.setMaximumPoolSize(20);
    return ds;
}
@Bean
public JpaTransactionManager transactionManager(EntityManagerFactory entityManagerFactory) {
    JpaTransactionManager manager = new JpaTransactionManager();
    manager.setEntityManagerFactory(entityManagerFactory);
    return manager;
}
@Bean
@Autowired
public LocalContainerEntityManagerFactoryBean entityManagerFactory(javax.sql.DataSource dataSource) {
    LocalContainerEntityManagerFactoryBean factory = new LocalContainerEntityManagerFactoryBean();
    factory.setDataSource(dataSource);
    factory.setPackagesToScan("com.rdthree.plenty.domain");
    Properties properties = new Properties();
    properties.put("org.hibernate.flushMode", "ALWAYS");
    properties.put("hibernate.cache.use_second_level_cache", "true");
    properties.put("hibernate.cache.use_query_cache", "true");
    properties.put("hibernate.cache.region.factory_class",
            "com.rdthree.plenty.config.PlentyInfinispanRegionFactory");
    properties.put("hibernate.cache.infinispan.statistics", "true");
    properties.put("hibernate.cache.infinispan.query", "distributed-query");
    // lets Hibernate initialize lazy associations outside a transaction by opening a temporary session
    properties.put("hibernate.enable_lazy_load_no_trans", "true");
    if (plentyConfig.getProfile().equals(PlentyConfig.UNIT_TEST)
            || plentyConfig.getProfile().equals(PlentyConfig.PRODUCTION_INIT)) {
        properties.put("hibernate.cache.infinispan.cfg", "infinispan-local.xml");
    } else {
        properties.put("hibernate.cache.infinispan.cfg", "infinispan.xml");
    }
    factory.setJpaProperties(properties);
    HibernateJpaVendorAdapter adapter = new HibernateJpaVendorAdapter();
    adapter.setShowSql(false);
    adapter.setDatabasePlatform("org.hibernate.dialect.MySQLDialect");
    factory.setJpaVendorAdapter(adapter);
    return factory;
}
The way it works is that the thread fired by the user iterates over a collection of plans and applies each of them; the process that applies a plan also updates the progress entity in the DB.
The whole thread bean is marked as transactional:
@Component
@Scope("prototype")
@Transactional(propagation = Propagation.REQUIRES_NEW, isolation = Isolation.READ_COMMITTED)
public class TemplatePlanApplicationThreadBean extends AbstractPlentyThread implements TemplatePlanApplicationThread {
    ...
    @Override
    public void run() {
        startProcessing();
        try {
            logger.trace(
                    "---Starting plan manifestation for " + fieldCropReplaceDatePlantationDates.size() + " fields---");
            List<Plan> plans = new ArrayList<>();
            for (FieldCropReplaceDatePlantationDates obj : fieldCropReplaceDatePlantationDates) {
                for (TemplatePlan templatePlan : obj.getTemplatePlans()) {
                    try {
                        plans.add(planService.findActivePlanAndManifestTemplatePlan(templatePlan, organization,
                                obj.getPlantationDate(), obj.getReplacementStartDate(), obj.getFieldCrop(),
                                autoSchedule, schedulerRequestArguments, planApplicationProgress, false));
                    } catch (ActivityException e) {
                        throw new IllegalArgumentException(e);
                    }
                }
                Plan plan = plans.get(plans.size() - 1);
                plan = planService.getEntityById(plan.getId());
                if (plan != null) {
                    planService.setUnscheduledPlanAsSelected(plan);
                }
                plans.clear();
            }
            if (planApplicationProgressService.getEntityById(planApplicationProgress.getId()) != null) {
                planApplicationProgressService.deleteEntity(planApplicationProgress.getId());
            }
        } catch (Exception e) {
            logger.error(PlentyUtils.extrapolateStackTrace(e));
            connector.createIssue("RdThreeLLC", "plenty-web",
                    new GitHubIssueRequest("Template plan application failure",
                            "```\n" + PlentyUtils.extrapolateStackTrace(e) + "\n```", 1, new ArrayList<>(),
                            Lists.newArrayList("plentytickets")));
            planApplicationProgress.setFailed(true);
            planApplicationProgressService.saveEntity(planApplicationProgress);
        } finally {
            endProcessing();
        }
    }
Here is the method called by the thread:
@Override
@Transactional(propagation = Propagation.REQUIRES_NEW)
public synchronized Plan findActivePlanAndManifestTemplatePlan(TemplatePlan templatePlan,
        ServiceProviderOrganization organization, Date plantationDate, Date replacementDate, FieldCrop fieldCrop,
        boolean autoSchedule, SchedulerRequestArguments schedulerRequestArguments, Progress planProgress,
        boolean commit) throws ActivityException {
    Plan oldPlan = getLatestByFieldCrop(fieldCrop);
    List<Activity> activitiesToRemove = oldPlan != null
            ? findActivitiesToRemoveAfterDate(oldPlan, replacementDate != null ? replacementDate : new Date())
            : new ArrayList<>();
    List<PlanExpense> planExpensesToRemove = oldPlan != null
            ? findPlanExpensesToRemoveAfterDate(oldPlan, replacementDate != null ? replacementDate : new Date())
            : new ArrayList<>();
    Date oldPlanPlantationDate = oldPlan != null ? inferPlanDates(oldPlan).getPlantationDate() : null;
    if (oldPlan != null) {
        if (commit) {
            oldPlan.setReplaced(true);
        }
        buildPlanProfitProjectionForPlanAndField(oldPlan, Sets.newHashSet(activitiesToRemove),
                Sets.newHashSet(planExpensesToRemove));
    }
    if (commit) {
        for (Activity activity : activitiesToRemove) {
            activityService.deleteEntity(activity.getId());
        }
        for (PlanExpense planExpense : planExpensesToRemove) {
            planExpenseService.deleteEntity(planExpense.getId());
        }
    }
    oldPlan = oldPlan != null ? getEntityById(oldPlan.getId()) : null;
    Plan plan = manifestTemplatePlan(templatePlan, oldPlan, organization,
            plantationDate != null ? plantationDate : oldPlanPlantationDate, replacementDate, fieldCrop,
            autoSchedule, schedulerRequestArguments, planProgress, commit);
    if (!commit) {
        setPlanAllocationsUnscheduled(plan);
    }
    return plan;
}
The thing that kills me is that the error only happens sometimes, so I can't really debug it and can't correlate it with anything.
Any ideas about what would cause the session to close?
I tried turning off all the other threads, so this is basically the only one running; it didn't help.
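One way to narrow this down (a minimal diagnostic sketch, not from the original code) is to log whether Spring actually opened a transaction inside run(). @Transactional is applied through a proxy, so if run() is invoked directly on a raw Thread rather than through the Spring proxy, no transaction (and no bound session) exists, and every lazy load then depends on the temporary sessions created by hibernate.enable_lazy_load_no_trans:

import org.springframework.transaction.support.TransactionSynchronizationManager;

// first lines of run(), before any lazy load:
logger.trace("tx active: " + TransactionSynchronizationManager.isActualTransactionActive()
        + ", tx name: " + TransactionSynchronizationManager.getCurrentTransactionName());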
Thanks

Related

Cannot acquire lock exception in lettuce

We have recently moved from Jedis to Lettuce in our production services, but we have hit a roadblock while creating Redis distributed locks.
We are using a non-clustered setup of AWS ElastiCache with one master and 2 read replicas.
Configs:
Spring-boot : 2.2.5
spring-boot-starter-data-redis : 2.2.5
spring-data-redis : 2.2.5
spring-integration-redis : 5.2.4
redis : 5.0.6
@Bean
public LettuceConnectionFactory redisConnectionFactory() {
    GenericObjectPoolConfig poolingConfig = new GenericObjectPoolConfig();
    poolingConfig.setMaxIdle(Integer.valueOf(maxConnections));
    poolingConfig.setMaxTotal(Integer.valueOf(maxIdleConnections));
    poolingConfig.setMinIdle(Integer.valueOf(minIdleConnections));
    poolingConfig.setMaxWaitMillis(-1);
    final SocketOptions socketOptions = SocketOptions.builder().connectTimeout(Duration.ofSeconds(10)).build();
    final ClientOptions clientOptions = ClientOptions.builder().socketOptions(socketOptions).build();
    LettucePoolingClientConfiguration clientOption = LettucePoolingClientConfiguration.builder()
            .poolConfig(poolingConfig).readFrom(ReadFrom.REPLICA_PREFERRED)
            .commandTimeout(Duration.ofMillis(Long.valueOf(commandTimeout)))
            .clientOptions(clientOptions).useSsl().build();
    RedisStaticMasterReplicaConfiguration redisStaticMasterReplicaConfiguration = new RedisStaticMasterReplicaConfiguration(
            primaryEndPoint, Integer.valueOf(port));
    redisStaticMasterReplicaConfiguration.addNode(readerEndPoint, Integer.valueOf(port));
    redisStaticMasterReplicaConfiguration.setPassword(password);
    /*
     * LettuceClientConfiguration clientConfig = LettuceClientConfiguration
     *         .builder().useSsl()
     *         .readFrom(new ReadFrom() {
     *             @Override
     *             public List<RedisNodeDescription> select(Nodes nodes) {
     *                 List<RedisNodeDescription> allNodes = nodes.getNodes();
     *                 int ind = Math.abs(index.incrementAndGet() % allNodes.size());
     *                 RedisNodeDescription selected = allNodes.get(ind);
     *                 //logger.info("Selected random node {} with uri {}", ind, selected.getUri());
     *                 List<RedisNodeDescription> remaining = IntStream.range(0, allNodes.size())
     *                         .filter(i -> i != ind)
     *                         .mapToObj(allNodes::get).collect(Collectors.toList());
     *                 return Stream.concat(Stream.of(selected), remaining.stream())
     *                         .collect(Collectors.toList());
     *             }
     *         }).build();
     */
    return new LettuceConnectionFactory(redisStaticMasterReplicaConfiguration, clientOption);
}

@Bean
public StringRedisTemplate stringRedisTemplate() {
    return new StringRedisTemplate(redisConnectionFactory());
}
LOCKING SERVICE
@Service
public class RedisLockService {

    @Autowired
    RedisConnectionFactory redisConnectionFactory;

    private static final Logger LOGGER = LoggerFactory.getLogger(RedisLockService.class);

    public Lock obtainLock(String registryKey, String redisKey, Long lockExpiry) {
        try {
            RedisLockRegistry registry = new RedisLockRegistry(redisConnectionFactory, registryKey, lockExpiry);
            Lock lock = registry.obtain(redisKey);
            if (!lock.tryLock()) {
                LOGGER.info("Lock already made");
                return null;
            } else {
                return lock;
            }
        } catch (Exception e) {
            LOGGER.warn("Unable to acquire lock: ", e);
            return null;
        }
    }

    public void unLock(Lock lock) {
        if (lock != null)
            lock.unlock();
    }
}
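For reference, a typical call site for this service might look like the following (hypothetical keys and expiry, not from the original post), with the unlock in a finally block so the lock is always released:

Lock lock = redisLockService.obtainLock("lock-registry", "order:42", 30000L);
if (lock != null) {
    try {
        // ... critical section ...
    } finally {
        redisLockService.unLock(lock);
    }
}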
The error we are getting while trying to call the obtainLock function:
.RedisSystemException: Error in execution; nested exception is io.lettuce.core.RedisCommandExecutionException: ERR Error running script (call to f_8426c8df41c64d8177dce3ecbbe9146ef3759cd2): @user_script:6: @user_script: 6: -READONLY You can't write against a read only replica.
at org.springframework.integration.redis.util.RedisLockRegistry$RedisLock.rethrowAsLockException(RedisLockRegistry.java:224)
at org.springframework.integration.redis.util.RedisLockRegistry$RedisLock.tryLock(RedisLockRegistry.java:276)
at org.springframework.integration.redis.util.RedisLockRegistry$RedisLock.tryLock(Re
You need to connect to the master (read/write) node of Redis. The lock registry acquires locks through a Lua script that writes keys, and with ReadFrom.REPLICA_PREFERRED that script can be sent to a read-only replica, which is exactly what the -READONLY error reports.
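A minimal sketch of that fix, reusing the configuration fields from the question: give the lock registry its own connection factory that never routes commands to a replica, e.g. ReadFrom.MASTER_PREFERRED (or simply omit readFrom):

@Bean
public LettuceConnectionFactory lockConnectionFactory() {
    // lock scripts write keys, so they must run on the master
    LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
            .readFrom(ReadFrom.MASTER_PREFERRED)
            .useSsl().build();
    RedisStaticMasterReplicaConfiguration config = new RedisStaticMasterReplicaConfiguration(
            primaryEndPoint, Integer.valueOf(port));
    config.setPassword(password);
    return new LettuceConnectionFactory(config, clientConfig);
}

The RedisLockRegistry would then be built from this factory, while the replica-preferring factory stays in place for plain reads.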

Unit test in Spring Boot using Mockito

While executing the JUnit 4 test it shows a NullPointerException: the save method used in the unit test returns null. I am using Mockito with JUnit 4 to mock the method. Someone please help me out with this.
**Service Method.**
public Result save(Map inputParams){
    Result result = new Result();
    logger.info("::::::::::::::: save ::::::::::::::::"+inputParams);
    try{
        String name = inputParams.get("name").toString();
        String type = inputParams.get("type").toString();
        CoreIndustry coreIndustry = coreIndustryDao.findByName(name);
        if(coreIndustry != null){
            result.setStatusCode(HttpStatus.FOUND.value());
            result.setMessage(Messages.NAME_EXIST_MESSAGE);
            result.setSuccess(false);
        }else{
            CoreIndustry coreIndustryNew = new CoreIndustry();
            coreIndustryNew.setName(name);
            coreIndustryNew.setType(type);
            coreIndustryNew.setInfo(new Gson().toJson(inputParams.get("info")));
            System.out.println("CoreIndustry Info is :"+coreIndustryNew.getInfo());
            CoreIndustry coreIndustryData = coreIndustryDao.save(coreIndustryNew);
            System.out.println("Saved Data Is: "+coreIndustryData.getName()+" "+coreIndustryData.getType());
            result.setData(coreIndustryData);
            result.setStatusCode(HttpStatus.OK.value());
            result.setMessage(Messages.CREATE_MESSAGE);
            result.setSuccess(true);
        }
    }catch (Exception e){
        logger.error("::::::::::::::: Exception ::::::::::::::::"+e.getMessage());
        result.setStatusCode(HttpStatus.INTERNAL_SERVER_ERROR.value());
        result.setSuccess(false);
        result.setMessage(e.getMessage());
    }
    return result;
}
**Controller**
@PostMapping(path = "/industry/save")
public Result save(@RequestBody Map<String, Object> stringToParse) {
    logger.debug("save---------------" + stringToParse);
    Result result = industryService.save(stringToParse);
    return result;
}
**Unit Test**
@RunWith(SpringRunner.class)
@SpringBootTest
public class IndustryServiceTest {

    @MockBean
    private CoreIndustryDao coreIndustryDao;

    // note: this field is declared but never initialized or injected
    private IndustryService industryService;

    @Test
    public void getAll() {
        System.out.println("::::::: Inside of GetAll Method of Controller.");
        // when(coreIndustryDao.findAll()).thenReturn(Stream.of(
        //         new CoreIndustry("Dilip", "Brik", "Brik Industry"))
        //         .collect(Collectors.toList()));
        // assertEquals(1, industryService.getAll().setData());
    }

    @Test
    public void save() {
        ObjectMapper oMapper = new ObjectMapper();
        CoreIndustry coreIndustry = new CoreIndustry();
        coreIndustry.setId(2L);
        coreIndustry.setName("Dilip");
        coreIndustry.setType("Business");
        HashMap<String, Object> map = new HashMap<>();
        map.put("name", "Retail");
        map.put("type", "Development");
        coreIndustry.setInfo(new Gson().toJson(map));
        when(coreIndustryDao.save(any(CoreIndustry.class))).thenReturn(new CoreIndustry());
        Map<String, Object> actualValues = oMapper.convertValue(coreIndustry, Map.class);
        System.out.println("CoreIndustry field values are: " + coreIndustry.getName() + " " + coreIndustry.getInfo());
        Result created = industryService.save(actualValues);
        CoreIndustry coreIndustryValue = (CoreIndustry) created.getData();
        Map<String, Object> expectedValues = oMapper.convertValue(coreIndustryValue, Map.class);
        System.out.println("Getting saved data from CoreIndustry: " + expectedValues);
        System.out.println("Getting saved data from CoreIndustry: " + coreIndustryValue.getName());
        assertThat(actualValues).isSameAs(expectedValues);
    }
}
I am new to Spring Boot. This happens after running the save test and stepping through the source code in the debugger.
It would be great if someone could help me out. Thank you.
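One visible problem in the test above (a hedged observation, not from an accepted answer): industryService is declared but never instantiated or injected, so industryService.save(...) throws a NullPointerException. A plain Mockito setup, assuming IndustryService is a concrete class whose CoreIndustryDao can be injected, might look like:

import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.when;

import java.util.HashMap;
import java.util.Map;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class IndustryServiceTest {

    @Mock
    private CoreIndustryDao coreIndustryDao;

    @InjectMocks // creates the service and injects the mocked DAO
    private IndustryService industryService;

    @Test
    public void save() {
        when(coreIndustryDao.findByName("Retail")).thenReturn(null); // no existing industry
        // echo the entity back so the Result carries the saved values
        when(coreIndustryDao.save(any(CoreIndustry.class)))
                .thenAnswer(invocation -> invocation.getArgument(0));

        Map<String, Object> params = new HashMap<>();
        params.put("name", "Retail");
        params.put("type", "Development");
        params.put("info", "{}");

        Result created = industryService.save(params);
        assertThat(created.getData()).isNotNull();
    }
}

(With @SpringBootTest and @MockBean, the alternative is to annotate the industryService field with @Autowired so the Spring context supplies it.)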

Intermittent SocketTimeoutException with elasticsearch-rest-client-7.2.0

I am using RestHighLevelClient version 7.2 to connect to an Elasticsearch cluster version 7.2. My cluster has 3 master nodes and 2 data nodes (data node config: 2 cores and 8 GB). I have used the code below in my Spring Boot project to create the RestHighLevelClient instance.
@Bean(destroyMethod = "close")
@Qualifier("readClient")
public RestHighLevelClient readClient() {
    final CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
    credentialsProvider.setCredentials(AuthScope.ANY,
            new UsernamePasswordCredentials(elasticUser, elasticPass));
    RestClientBuilder builder = RestClient.builder(new HttpHost(elasticHost, elasticPort))
            .setHttpClientConfigCallback(httpClientBuilder -> httpClientBuilder
                    .setDefaultCredentialsProvider(credentialsProvider)
                    .setDefaultIOReactorConfig(IOReactorConfig.custom().setIoThreadCount(5).build()));
    builder.setRequestConfigCallback(requestConfigBuilder -> requestConfigBuilder
            .setConnectTimeout(30000).setSocketTimeout(60000));
    RestHighLevelClient restClient = new RestHighLevelClient(builder);
    return restClient;
}
RestHighLevelClient is a singleton bean. Intermittently I am getting a SocketTimeoutException on both GET and PUT requests. The index size is around 50 MB. I have tried increasing the socket timeout value, but I still receive the same error. Am I missing some configuration? Any help would be appreciated.
I found the issue and just wanted to share so that it can help others.
I was using a load balancer to connect to the Elasticsearch cluster.
As you can see from my RestClientBuilder code, I was using only the load-balancer host and port. Although I have multiple master nodes, RestClient was not retrying my request in case of a connection timeout.
RestClientBuilder builder = RestClient.builder(new HttpHost(elasticHost, elasticPort))
        .setHttpClientConfigCallback(httpClientBuilder -> httpClientBuilder
                .setDefaultCredentialsProvider(credentialsProvider)
                .setDefaultIOReactorConfig(IOReactorConfig.custom().setIoThreadCount(5).build()));
According to the RestClient code, if we use a single host it won't retry in case of any connection issue.
So I changed my code as below, and it started working.
RestClientBuilder builder = RestClient.builder(new HttpHost(elasticHost, 9200), new HttpHost(elasticHost, 9201))
        .setHttpClientConfigCallback(httpClientBuilder -> httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider));
For the complete RestClient code, please refer to https://github.com/elastic/elasticsearch/blob/master/client/rest/src/main/java/org/elasticsearch/client/RestClient.java
The retry code block in RestClient:
private Response performRequest(final NodeTuple<Iterator<Node>> nodeTuple,
                                final InternalRequest request,
                                Exception previousException) throws IOException {
    RequestContext context = request.createContextForNextAttempt(nodeTuple.nodes.next(), nodeTuple.authCache);
    HttpResponse httpResponse;
    try {
        httpResponse = client.execute(context.requestProducer, context.asyncResponseConsumer, context.context, null).get();
    } catch (Exception e) {
        RequestLogger.logFailedRequest(logger, request.httpRequest, context.node, e);
        onFailure(context.node);
        Exception cause = extractAndWrapCause(e);
        addSuppressedException(previousException, cause);
        if (nodeTuple.nodes.hasNext()) {
            return performRequest(nodeTuple, request, cause);
        }
        if (cause instanceof IOException) {
            throw (IOException) cause;
        }
        if (cause instanceof RuntimeException) {
            throw (RuntimeException) cause;
        }
        throw new IllegalStateException("unexpected exception type: must be either RuntimeException or IOException", cause);
    }
    ResponseOrResponseException responseOrResponseException = convertResponse(request, context.node, httpResponse);
    if (responseOrResponseException.responseException == null) {
        return responseOrResponseException.response;
    }
    addSuppressedException(previousException, responseOrResponseException.responseException);
    if (nodeTuple.nodes.hasNext()) {
        return performRequest(nodeTuple, request, responseOrResponseException.responseException);
    }
    throw responseOrResponseException.responseException;
}
I'm facing the same issue, and seeing this I realized that the retry is happening on my side too, on each host (I have 3 hosts and the exception happens in 3 threads). I wanted to post this since you might face the same issue, or someone else might come to this post because of the same SocketTimeoutException.
Searching the official docs: the RestHighLevelClient uses the RestClient under the hood, and the RestClient uses CloseableHttpAsyncClient, which has a connection pool. Elasticsearch specifies that you should close the connection once you are done (the definition of "done" is ambiguous for a long-running application), but in general what I have found is that you should close it when the application is shutting down, rather than when you have finished querying.
The official Apache documentation has an example of handling the connection pool, which I'm trying to follow; I'll try to replicate the scenario and will post here if that fixes my issue. The code can be found here:
https://hc.apache.org/httpcomponents-asyncclient-dev/httpasyncclient/examples/org/apache/http/examples/nio/client/AsyncClientEvictExpiredConnections.java
This is what I have so far:
@Bean(name = "RestHighLevelClientWithCredentials", destroyMethod = "close")
public RestHighLevelClient elasticsearchClient(ElasticSearchClientConfiguration elasticSearchClientConfiguration,
        RestClientBuilder.HttpClientConfigCallback httpClientConfigCallback) {
    return new RestHighLevelClient(
            RestClient
                    .builder(getElasticSearchHosts(elasticSearchClientConfiguration))
                    .setHttpClientConfigCallback(httpClientConfigCallback)
    );
}

@Bean
@RefreshScope
public RestClientBuilder.HttpClientConfigCallback getHttpClientConfigCallback(
        PoolingNHttpClientConnectionManager poolingNHttpClientConnectionManager,
        CredentialsProvider credentialsProvider
) {
    return httpAsyncClientBuilder -> {
        httpAsyncClientBuilder.setSSLHostnameVerifier(NoopHostnameVerifier.INSTANCE);
        httpAsyncClientBuilder.setDefaultCredentialsProvider(credentialsProvider);
        httpAsyncClientBuilder.setConnectionManager(poolingNHttpClientConnectionManager);
        return httpAsyncClientBuilder;
    };
}
public class ElasticSearchClientManager {

    private ElasticSearchClientManager.IdleConnectionEvictor idleConnectionEvictor;

    /**
     * Custom client connection manager to create a connection watcher
     *
     * @param elasticSearchClientConfiguration elasticSearchClientConfiguration
     * @return PoolingNHttpClientConnectionManager
     */
    @Bean
    @RefreshScope
    public PoolingNHttpClientConnectionManager getPoolingNHttpClientConnectionManager(
            ElasticSearchClientConfiguration elasticSearchClientConfiguration
    ) {
        try {
            SSLIOSessionStrategy sslSessionStrategy = new SSLIOSessionStrategy(getTrustAllSSLContext());
            Registry<SchemeIOSessionStrategy> sessionStrategyRegistry = RegistryBuilder.<SchemeIOSessionStrategy>create()
                    .register("http", NoopIOSessionStrategy.INSTANCE)
                    .register("https", sslSessionStrategy)
                    .build();
            ConnectingIOReactor ioReactor = new DefaultConnectingIOReactor();
            PoolingNHttpClientConnectionManager poolingNHttpClientConnectionManager =
                    new PoolingNHttpClientConnectionManager(ioReactor, sessionStrategyRegistry);
            idleConnectionEvictor = new ElasticSearchClientManager.IdleConnectionEvictor(poolingNHttpClientConnectionManager,
                    elasticSearchClientConfiguration);
            idleConnectionEvictor.start();
            return poolingNHttpClientConnectionManager;
        } catch (IOReactorException e) {
            throw new RuntimeException("Failed to create a watcher for the connection pool");
        }
    }

    private SSLContext getTrustAllSSLContext() {
        try {
            return new SSLContextBuilder()
                    .loadTrustMaterial(null, (x509Certificates, string) -> true)
                    .build();
        } catch (Exception e) {
            throw new RuntimeException("Failed to create SSL Context with open certificate", e);
        }
    }

    public IdleConnectionEvictor.State state() {
        return idleConnectionEvictor.evictorState;
    }

    @PreDestroy
    private void finishManager() {
        idleConnectionEvictor.shutdown();
    }

    public static class IdleConnectionEvictor extends Thread {

        private final NHttpClientConnectionManager nhttpClientConnectionManager;
        private final ElasticSearchClientConfiguration elasticSearchClientConfiguration;

        @Getter
        private State evictorState;
        private volatile boolean shutdown;

        public IdleConnectionEvictor(NHttpClientConnectionManager nhttpClientConnectionManager,
                ElasticSearchClientConfiguration elasticSearchClientConfiguration) {
            super();
            this.nhttpClientConnectionManager = nhttpClientConnectionManager;
            this.elasticSearchClientConfiguration = elasticSearchClientConfiguration;
        }

        @Override
        public void run() {
            try {
                while (!shutdown) {
                    synchronized (this) {
                        wait(elasticSearchClientConfiguration.getExpiredConnectionsCheckTime());
                        // Close expired connections
                        nhttpClientConnectionManager.closeExpiredConnections();
                        // Optionally, close connections
                        // that have been idle longer than 5 sec
                        nhttpClientConnectionManager.closeIdleConnections(elasticSearchClientConfiguration.getMaxTimeIdleConnections(),
                                TimeUnit.SECONDS);
                        this.evictorState = State.RUNNING;
                    }
                }
            } catch (InterruptedException ex) {
                this.evictorState = State.NOT_RUNNING;
            }
        }

        private void shutdown() {
            shutdown = true;
            synchronized (this) {
                notifyAll();
            }
        }

        public enum State {
            RUNNING,
            NOT_RUNNING
        }
    }
}

If and else block are both executed in a Spring method annotated as @Transactional

When I go to the /confirmation-account link, in the Tomcat console I can see that both the if and the else block are executed: I see the print from ColorConsoleHelper.getGreenLog("loginView") and the one from ColorConsoleHelper.getGreenLog("confirmationAccountView").
This is really strange behavior. Why?
@RequestMapping(value = "/confirmation-account", method = RequestMethod.GET)
@Transactional
public ModelAndView displayConfirmationAccountPage(ModelAndView modelAndView, @RequestParam Map<String, String> requestParams) {
    final int ACTIVE_USER = 1;
    // find the user associated with the confirmation token
    UserEntity userEntity = userService.findUserByConfirmationToken(requestParams.get("token"));
    // this should always be non-null but we check just in case
    if (userEntity != null) {
        // set the confirmation token to null so it cannot be used again
        userEntity.setConfirmationToken(null);
        // set enabled user
        userEntity.setEnabled(ACTIVE_USER);
        // save data: (token to null and active user)
        saveAll(userEntity.getTrainings());
        /*
         * RedirectAttributes won't work with ModelAndView, but returning a string from the redirecting handler method works.
         */
        modelAndView.addObject("successMessage", "Konto zostało pomyślnie aktywowane!"); // Polish: "The account has been activated successfully!"
        modelAndView.setViewName("loginView");
        ColorConsoleHelper.getGreenLog("loginView");
    } else {
        ColorConsoleHelper.getGreenLog("confirmationAccountView");
        modelAndView.addObject("errorMessage", "Link jest nieprawidłowy..."); // Polish: "The link is invalid..."
        modelAndView.setViewName("confirmationAccountView");
    }
    return modelAndView;
}
public void saveAll(List<TrainingUserEntity> trainingUserEntityList) {
    for (TrainingUserEntity trainingUserEntity : trainingUserEntityList) {
        entityManagerService.mergeUsingPersistenceUnitB(trainingUserEntity);
    }
}

public void mergeUsingPersistenceUnitB(Object object) {
    EntityManager entityManager = getEntityManagerPersistenceUnitB();
    EntityTransaction tx = null;
    try {
        tx = entityManager.getTransaction();
        tx.begin();
        entityManager.merge(object);
        tx.commit();
    } catch (RuntimeException e) {
        if (tx != null && tx.isActive()) tx.rollback();
        throw e; // or display error message
    } finally {
        entityManager.close();
    }
}
Below is the solution & explanation:
The /confirmation-account link is invoked twice, which is caused by the dynamic proxy that Spring creates for the @Transactional method on the controller. So it is mandatory to check how many times the displayConfirmationAccountPage method is actually invoked; that check is the workaround.
What do you think: is it good or not to annotate a controller method with @Transactional?
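A minimal way to confirm the double invocation (an illustrative sketch, not part of the original post) is to count entries into the handler with an AtomicInteger:

private final AtomicInteger confirmationCalls = new AtomicInteger(); // java.util.concurrent.atomic.AtomicInteger

// first line of displayConfirmationAccountPage(...):
ColorConsoleHelper.getGreenLog("displayConfirmationAccountPage invocation #" + confirmationCalls.incrementAndGet());

If the counter advances by two for a single click, the handler really is entered twice; on the second pass the confirmation token has already been cleared, so findUserByConfirmationToken returns null and the else branch runs, which matches the two log lines observed.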

Spring SAML extension for multiple IdPs

We are planning to use the Spring SAML extension as the SP in our application.
But the requirement in our application is that we need to communicate with more than one IdP.
Could anyone please provide or direct me to an example that uses multiple IdPs?
I would also like to know which kinds of IdPs the Spring SAML extension supports, e.g. OpenAM, Ping Federate, ADFS 2.0, etc.
Thanks,
--Vikas
You need a class that maintains a list of the metadata of each IdP - say you put those metadata objects in some list which is shared across the application by a static method. I have something like the below.
NOTE: I am not copying my classes exactly as I have them, so you might come across minor issues, which you should be able to resolve on your own.
public class SSOMetadataProvider {

    public static List<MetadataProvider> metadataList() throws MetadataProviderException, XMLParserException, IOException, Exception {
        logger.info("Starting : Loading Metadata Data for all SSO enabled companies...");
        List<MetadataProvider> metadataList = new ArrayList<MetadataProvider>();
        org.opensaml.xml.parse.StaticBasicParserPool parserPool = new org.opensaml.xml.parse.StaticBasicParserPool();
        parserPool.initialize();
        // Get XML from DB -> convertIntoInputStream -> pass below as constructor argument
        InputStreamMetadataProvider inputStreamMetadata = null;
        try {
            // Getting list from DB
            List companyList = someServiceClass.getAllSSOEnabledCompanyDTO();
            if (companyList != null) {
                for (Object obj : companyList) {
                    CompanyDTO companyDTO = (CompanyDTO) obj;
                    if (companyDTO != null && companyDTO.getCompanyid() > 0 && companyDTO.getSsoSettingsDTO() != null && !StringUtil.isNullOrEmpty(companyDTO.getSsoSettingsDTO().getSsoMetadataXml())) {
                        logger.info("Loading Metadata for Company : " + companyDTO.getCompanyname() + " , companyId : " + companyDTO.getCompanyid());
                        inputStreamMetadata = new InputStreamMetadataProvider(companyDTO.getSsoSettingsDTO().getSsoMetadataXml());
                        inputStreamMetadata.setParserPool(parserPool);
                        inputStreamMetadata.initialize();
                        //ExtendedMetadataDelegateWrapper extMetadaDel = new ExtendedMetadataDelegateWrapper(inputStreamMetadata, new org.springframework.security.saml.metadata.ExtendedMetadata());
                        SSOMetadataDelegate extMetadaDel = new SSOMetadataDelegate(inputStreamMetadata, new org.springframework.security.saml.metadata.ExtendedMetadata());
                        extMetadaDel.initialize();
                        extMetadaDel.setTrustFiltersInitialized(true);
                        metadataList.add(extMetadaDel);
                        logger.info("Loading Metadata bla bla");
                    }
                }
            }
        } catch (MetadataProviderException | IOException | XMLParserException mpe) {
            logger.warn(mpe);
            throw mpe;
        } catch (Exception e) {
            logger.warn(e);
        }
        logger.info("Finished : Loading Metadata Data for all SSO enabled companies...");
        return metadataList;
    }
}
InputStreamMetadataProvider.java
public class InputStreamMetadataProvider extends AbstractReloadingMetadataProvider implements Serializable {

    public InputStreamMetadataProvider(String metadata) throws MetadataProviderException {
        super();
        //metadataInputStream = metadata;
        metadataInputStream = SSOUtil.getIdpAsStream(metadata);
    }

    @Override
    protected byte[] fetchMetadata() throws MetadataProviderException {
        byte[] metadataBytes = metadataInputStream;
        if (metadataBytes.length > 0)
            return metadataBytes;
        else
            return null;
    }

    public byte[] getMetadataInputStream() {
        return metadataInputStream;
    }
}
SSOUtil.java
public class SSOUtil {
    public static byte[] getIdpAsStream(String metadatXml) {
        return metadatXml.getBytes();
    }
}
After a user requests their company's metadata, get the metadata for the entityID from each IdP:
SSOCachingMetadataManager.java
public class SSOCachingMetadataManager extends CachingMetadataManager {

    @Override
    public ExtendedMetadata getExtendedMetadata(String entityID) throws MetadataProviderException {
        ExtendedMetadata extendedMetadata = null;
        try {
            //UAT Defect Fix - org.springframework.security.saml.metadata.ExtendedMetadataDelegate cannot be cast to biz.bsite.direct.spring.app.sso.ExtendedMetadataDelegate
            //List<MetadataProvider> metadataList = (List<MetadataProvider>) GenericCache.getInstance().getCachedObject("ssoMetadataList", List.class.getClassLoader());
            List<MetadataProvider> metadataList = SSOMetadataProvider.metadataList();
            log.info("Retrieved Metadata List from Cassendra Cache size is :" + (metadataList != null ? metadataList.size() : 0));
            org.opensaml.xml.parse.StaticBasicParserPool parserPool = new org.opensaml.xml.parse.StaticBasicParserPool();
            parserPool.initialize();
            if (metadataList != null) {
                //metadataList.addAll(getAvailableProviders());
                //metadataList.addAll(getProviders());
                // To remove duplicate entries from the list, if any
                Set<MetadataProvider> hs = new HashSet<MetadataProvider>();
                hs.addAll(metadataList);
                metadataList.clear();
                metadataList.addAll(hs);
                //setAllProviders(metadataList);
                //setTrustFilterInitializedToTrue();
                //refreshMetadata();
            }
            if (metadataList != null && metadataList.size() > 0) {
                for (MetadataProvider metadataProvider : metadataList) {
                    log.info("metadataProvider instance of ExtendedMetadataDelegate: Looking for entityId" + entityID);
                    SSOMetadataDelegate ssoMetadataDelegate = null;
                    ExtendedMetadataDelegateWrapper extMetadaDel = null;
                    // extMetadaDel.getDelegate()
                    if (metadataProvider instanceof SSOMetadataDelegate) {
                        ssoMetadataDelegate = (SSOMetadataDelegate) metadataProvider;
                        ((InputStreamMetadataProvider) ssoMetadataDelegate.getDelegate()).setParserPool(parserPool);
                        ((InputStreamMetadataProvider) ssoMetadataDelegate.getDelegate()).initialize();
                        ssoMetadataDelegate.initialize();
                        ssoMetadataDelegate.setTrustFiltersInitialized(true);
                        if (!isMetadataAlreadyExist(ssoMetadataDelegate))
                            addMetadataProvider(ssoMetadataDelegate);
                        extMetadaDel = new ExtendedMetadataDelegateWrapper(ssoMetadataDelegate.getDelegate(), new org.springframework.security.saml.metadata.ExtendedMetadata());
                    } else {
                        extMetadaDel = new ExtendedMetadataDelegateWrapper(metadataProvider, new org.springframework.security.saml.metadata.ExtendedMetadata());
                    }
                    extMetadaDel.initialize();
                    extMetadaDel.setTrustFiltersInitialized(true);
                    extMetadaDel.initialize();
                    refreshMetadata();
                    extendedMetadata = extMetadaDel.getExtendedMetadata(entityID);
                }
            }
        } catch (Exception e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
        if (extendedMetadata != null) {
            return extendedMetadata;
        } else {
            return super.getExtendedMetadata(entityID);
        }
    }
    private boolean isMetadataAlreadyExist(SSOMetadataDelegate ssoMetadataDelegate) {
        boolean isExist = false;
        for (ExtendedMetadataDelegate item : getAvailableProviders()) {
            if (item.getDelegate() != null && item.getDelegate() instanceof SSOMetadataDelegate) {
                SSOMetadataDelegate that = (SSOMetadataDelegate) item.getDelegate();
                try {
                    log.info("This Entity ID: " + (ssoMetadataDelegate.getMetadata() != null ? ((EntityDescriptorImpl) ssoMetadataDelegate.getMetadata()).getEntityID() : "nullEntity")
                            + " That Entity ID: " + (that.getMetadata() != null ? ((EntityDescriptorImpl) that.getMetadata()).getEntityID() : "nullEntity"));
                    EntityDescriptorImpl e = (EntityDescriptorImpl) that.getMetadata();
                    isExist = this.getMetadata() != null ? ((EntityDescriptorImpl) ssoMetadataDelegate.getMetadata()).getEntityID().equals(e.getEntityID()) : false;
                    if (isExist)
                        return isExist;
                } catch (MetadataProviderException e1) {
                    // TODO Auto-generated catch block
                    e1.printStackTrace();
                }
            }
        }
        return isExist;
    }
}
Add an entry in your Spring bean XML:
<bean id="metadata" class="pkg.path.SSOCachingMetadataManager">
    <constructor-arg name="providers" value="#{ssoMetadataProvider.metadataList()}">
    </constructor-arg>
    <property name="RefreshCheckInterval" value="-1"/>
    <property name="RefreshRequired" value="false"/>
</bean>
Let me know in case of any concerns.
I have recently configured two IdPs for the Spring SAML extension. Here we should follow one basic rule: for each IdP we want to add, we have to configure one IdP provider as well as one SP provider. We should configure the providers in a MetadataManager bean, CachingMetadataManager for example. Here are some code snippets to give an idea of what I am trying to say:
public void addProvider(String providerMetadataUrl, String idpEntityId, String spEntityId, String alias) {
    addIDPMetadata(providerMetadataUrl, idpEntityId, alias);
    addSPMetadata(spEntityId, alias);
}

public void addIDPMetadata(String providerMetadataUrl, String idpEntityId, String alias) {
    try {
        if (metadata.getIDPEntityNames().contains(idpEntityId)) {
            return;
        }
        metadata.addMetadataProvider(extendedMetadataProvider(providerMetadataUrl, alias));
    } catch (MetadataProviderException e1) {
        log.error("Error initializing metadata", e1);
    }
}

public void addSPMetadata(String spEntityId, String alias) {
    try {
        if (metadata.getSPEntityNames().contains(spEntityId)) {
            return;
        }
        MetadataGenerator generator = new MetadataGenerator();
        generator.setEntityId(spEntityId);
        generator.setEntityBaseURL(baseURL);
        generator.setExtendedMetadata(extendedMetadata(alias));
        generator.setIncludeDiscoveryExtension(true);
        generator.setKeyManager(keyManager);
        EntityDescriptor descriptor = generator.generateMetadata();
        ExtendedMetadata extendedMetadata = generator.generateExtendedMetadata();
        MetadataMemoryProvider memoryProvider = new MetadataMemoryProvider(descriptor);
        memoryProvider.initialize();
        MetadataProvider metadataProvider = new ExtendedMetadataDelegate(memoryProvider, extendedMetadata);
        metadata.addMetadataProvider(metadataProvider);
        metadata.setHostedSPName(descriptor.getEntityID());
        metadata.refreshMetadata();
    } catch (MetadataProviderException e1) {
        log.error("Error initializing metadata", e1);
    }
}

public ExtendedMetadataDelegate extendedMetadataProvider(String providerMetadataUrl, String alias)
        throws MetadataProviderException {
    HTTPMetadataProvider provider = new HTTPMetadataProvider(this.bgTaskTimer, httpClient, providerMetadataUrl);
    provider.setParserPool(parserPool);
    ExtendedMetadataDelegate delegate = new ExtendedMetadataDelegate(provider, extendedMetadata(alias));
    delegate.setMetadataTrustCheck(true);
    delegate.setMetadataRequireSignature(false);
    return delegate;
}

private ExtendedMetadata extendedMetadata(String alias) {
    ExtendedMetadata exmeta = new ExtendedMetadata();
    exmeta.setIdpDiscoveryEnabled(true);
    exmeta.setSignMetadata(false);
    exmeta.setEcpEnabled(true);
    if (alias != null && alias.length() > 0) {
        exmeta.setAlias(alias);
    }
    return exmeta;
}
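For illustration, registering an additional IdP with these helpers might look like this (hypothetical URLs and entity IDs):

// the alias distinguishes the per-IdP SP configuration
addProvider("https://idp2.example.com/metadata.xml", // IdP metadata URL
        "https://idp2.example.com/entity-id",        // IdP entityID
        "https://sp.example.com/idp2",               // SP entityID for this pairing
        "idp2");                                     // alias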
You can find all the answers to your questions in the Spring SAML manual.
The sample application included as part of the product already contains metadata for two IDPs; use it as an example.
The statement on IdPs is included in chapter 1.2:
All products supporting SAML 2.0 in Identity Provider mode (e.g. ADFS 2.0, Shibboleth, OpenAM/OpenSSO, Efecte Identity or Ping Federate) can be used with the extension.
