Configuring Quartz2 in Apache Camel and Fuse (Spring)

I have the following route in Camel:
<route id="ROUTE-WK-OPSWEBGUSOXUA" streamCache="true">
    <from id="_from1" uri="{{opsweb.gusoxUA.timer.Endpoint}}?cron={{opsweb.gusoxUA.cron.timer}}&amp;stateful=true">
        <description>Main route of wk-opswebGusoxUA</description>
    </from>
    <bean id="_bean1" method="queryOpsweb" ref="jdbcTemplateOpsweb">
        <description>query pesobalance.soc_users_opsweb where user_state = 'A'</description>
    </bean>
    <doTry id="_doTry1">
        <choice id="_choice1">
            <when id="_when1">
                <simple>${body} == '[]'</simple>
                <log id="_log2" loggerRef="loggerRef" loggingLevel="INFO" message="Result query dbo.sistemas empty => ${body} "/>
                <!-- Throw an exception -->
            </when>
            <otherwise id="_otherwise1">
                <log id="_log3" loggerRef="loggerRef" loggingLevel="INFO" message="Splitter starts "/>
                <split id="_split1" stopOnException="false">
                    <simple>${body}</simple>
                    <process id="_process2" ref="csvTrasformationProcessor"/>
                    <log id="_log6" loggerRef="loggerRef" loggingLevel="INFO" message="Register of ApplicationAfter => ${body}"/>
                    <marshal id="_marshall1" ref="dataModel"/>
                    <to id="_to2" uri="file:{{opsweb.gusoxUA.file.location}}?fileName=${date:now:yyyyMMdd}_opsweb.csv&amp;fileExist=Append"/>
                </split>
But Quartz creates 10 threads and executes the route 10 times, and I want it to execute only once.
When I deploy it in Fuse I see the following:
2020-04-21 14:58:50,081 | INFO | xtenderThread-46 | ManagedManagementStrategy | 172 - org.apache.camel.camel-core - 2.15.1.redhat-621084 | JMX is enabled
2020-04-21 14:58:50,112 | INFO | xtenderThread-46 | QuartzComponent | 222 - org.apache.camel.camel-quartz2 - 2.15.1.redhat-621084 | Create and initializing scheduler.
2020-04-21 14:58:50,113 | INFO | xtenderThread-46 | QuartzComponent | 222 - org.apache.camel.camel-quartz2 - 2.15.1.redhat-621084 | Setting org.quartz.scheduler.jmx.export=true to ensure QuartzScheduler(s) will be enlisted in JMX.
2020-04-21 14:58:50,114 | INFO | xtenderThread-46 | StdSchedulerFactory | 272 - org.quartz-scheduler.quartz - 2.2.1 | Using default implementation for ThreadExecutor
2020-04-21 14:58:50,114 | INFO | xtenderThread-46 | SimpleThreadPool | 272 - org.quartz-scheduler.quartz - 2.2.1 | Job execution threads will use class loader of thread: SpringOsgiExtenderThread-46
2020-04-21 14:58:50,115 | INFO | xtenderThread-46 | SchedulerSignalerImpl | 272 - org.quartz-scheduler.quartz - 2.2.1 | Initialized Scheduler Signaller of type: class org.quartz.core.SchedulerSignalerImpl
2020-04-21 14:58:50,115 | INFO | xtenderThread-46 | QuartzScheduler | 272 - org.quartz-scheduler.quartz - 2.2.1 | Quartz Scheduler v.2.2.1 created.
2020-04-21 14:58:50,115 | INFO | xtenderThread-46 | RAMJobStore | 272 - org.quartz-scheduler.quartz - 2.2.1 | RAMJobStore initialized.
2020-04-21 14:58:50,116 | INFO | xtenderThread-46 | QuartzScheduler | 272 - org.quartz-scheduler.quartz - 2.2.1 | Scheduler meta-data: Quartz Scheduler (v2.2.1) 'DefaultQuartzScheduler-com.avianca.datacenter.sal.WK-opswebGusoxUA' with instanceId 'NON_CLUSTERED'
Scheduler class: 'org.quartz.core.QuartzScheduler' - running locally.
NOT STARTED.
Currently in standby mode.
Number of jobs executed: 0
Using thread pool 'org.quartz.simpl.SimpleThreadPool' - with 10 threads.
Using job-store 'org.quartz.simpl.RAMJobStore' - which does not support persistence. and is not clustered.
How can I configure Quartz for only one execution instead of 10?

I think this was fixed in a newer version of Camel. You could try using a more recent release.

If you only want to execute once, the timer component is more suitable, with repeatCount=1:
timer://timerName?repeatCount=1
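In the Spring DSL that looks roughly like this (a minimal sketch; the route and endpoint names are illustrative, the bean ref is taken from the route above):
<route id="run-once">
    <!-- fires a single exchange after the default 1s delay, then never again -->
    <from uri="timer://runOnce?repeatCount=1"/>
    <bean method="queryOpsweb" ref="jdbcTemplateOpsweb"/>
</route>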
If it should instead fire once every XX seconds, use quartz2. For example, every 60 seconds (spaces in the cron expression must be encoded, e.g. as +, in the endpoint URI):
quartz2://timerName?cron=0+0/1+*+1/1+*+?+*
Which one applies depends on the contents of {{opsweb.gusoxUA.cron.timer}}.
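Note that the "10 threads" reported in the log is Quartz's default SimpleThreadPool size. If you want a single worker thread, you can shrink the pool when configuring the quartz2 component (a sketch, assuming a Spring XML application context; org.quartz.threadPool.threadCount is a standard Quartz property):
<bean id="quartz2" class="org.apache.camel.component.quartz2.QuartzComponent">
    <property name="properties">
        <props>
            <!-- one scheduler worker thread instead of the default 10 -->
            <prop key="org.quartz.threadPool.threadCount">1</prop>
        </props>
    </property>
</bean>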

Related

Spring session table not created in Docker container

When I launch my Spring application from my IDE (IntelliJ), there is no problem with the creation of all the tables, including the Spring Session table.
However, when I launch it with the generated jar (with the same things inside) from my Docker containers, it creates my tables excluding the Spring Session one.
I receive this error, telling me that the SPRING_SESSION table does not exist:
app | 09:43:33.837 [HikariPool-1 housekeeper] DEBUG com.zaxxer.hikari.pool.HikariPool - HikariPool-1 - Pool stats (total=10, active=0, idle=10, waiting=0)
app | 09:43:33.837 [HikariPool-1 housekeeper] DEBUG com.zaxxer.hikari.pool.HikariPool - HikariPool-1 - Fill pool skipped, pool is at sufficient level.
app | 09:44:00.005 [pool-1-thread-1] DEBUG org.springframework.orm.jpa.JpaTransactionManager - Creating new transaction with name [null]: PROPAGATION_REQUIRES_NEW,ISOLATION_DEFAULT
app | 09:44:00.006 [pool-1-thread-1] DEBUG org.springframework.orm.jpa.JpaTransactionManager - Opened new EntityManager [SessionImpl(1059542725<open>)] for JPA transaction
app | 09:44:00.014 [pool-1-thread-1] DEBUG org.hibernate.engine.transaction.internal.TransactionImpl - On TransactionImpl creation, JpaCompliance#isJpaTransactionComplianceEnabled == false
app | 09:44:00.015 [pool-1-thread-1] DEBUG org.hibernate.engine.transaction.internal.TransactionImpl - begin
app | 09:44:00.019 [pool-1-thread-1] DEBUG org.springframework.orm.jpa.JpaTransactionManager - Exposing JPA transaction as JDBC [org.springframework.orm.jpa.vendor.HibernateJpaDialect$HibernateConnectionHandle#3b3c4387]
app | 09:44:00.021 [pool-1-thread-1] DEBUG org.springframework.jdbc.core.JdbcTemplate - Executing prepared SQL update
app | 09:44:00.022 [pool-1-thread-1] DEBUG org.springframework.jdbc.core.JdbcTemplate - Executing prepared SQL statement [DELETE FROM SPRING_SESSION WHERE EXPIRY_TIME < ?]
db | 2021-09-04 09:44:00.037 UTC [82] ERROR: relation "spring_session" does not exist at character 13
db | 2021-09-04 09:44:00.037 UTC [82] STATEMENT: DELETE FROM SPRING_SESSION WHERE EXPIRY_TIME < $1
app | 09:44:00.280 [pool-1-thread-1] DEBUG org.springframework.beans.factory.xml.XmlBeanDefinitionReader - Loaded 11 bean definitions from class path resource [org/springframework/jdbc/support/sql-error-codes.xml]
app | 09:44:00.281 [pool-1-thread-1] DEBUG org.springframework.beans.factory.support.DefaultListableBeanFactory - Creating shared instance of singleton bean 'DB2'
app | 09:44:00.289 [pool-1-thread-1] DEBUG org.springframework.beans.factory.support.DefaultListableBeanFactory - Creating shared instance of singleton bean 'Derby'
app | 09:44:00.291 [pool-1-thread-1] DEBUG org.springframework.beans.factory.support.DefaultListableBeanFactory - Creating shared instance of singleton bean 'H2'
app | 09:44:00.293 [pool-1-thread-1] DEBUG org.springframework.beans.factory.support.DefaultListableBeanFactory - Creating shared instance of singleton bean 'HDB'
app | 09:44:00.295 [pool-1-thread-1] DEBUG org.springframework.beans.factory.support.DefaultListableBeanFactory - Creating shared instance of singleton bean 'HSQL'
app | 09:44:00.297 [pool-1-thread-1] DEBUG org.springframework.beans.factory.support.DefaultListableBeanFactory - Creating shared instance of singleton bean 'Informix'
app | 09:44:00.298 [pool-1-thread-1] DEBUG org.springframework.beans.factory.support.DefaultListableBeanFactory - Creating shared instance of singleton bean 'MS-SQL'
app | 09:44:00.299 [pool-1-thread-1] DEBUG org.springframework.beans.factory.support.DefaultListableBeanFactory - Creating shared instance of singleton bean 'MySQL'
app | 09:44:00.300 [pool-1-thread-1] DEBUG org.springframework.beans.factory.support.DefaultListableBeanFactory - Creating shared instance of singleton bean 'Oracle'
app | 09:44:00.302 [pool-1-thread-1] DEBUG org.springframework.beans.factory.support.DefaultListableBeanFactory - Creating shared instance of singleton bean 'PostgreSQL'
app | 09:44:00.303 [pool-1-thread-1] DEBUG org.springframework.beans.factory.support.DefaultListableBeanFactory - Creating shared instance of singleton bean 'Sybase'
app | 09:44:00.305 [pool-1-thread-1] DEBUG org.springframework.jdbc.support.SQLErrorCodesFactory - Looking up default SQLErrorCodes for DataSource [com.zaxxer.hikari.HikariDataSource#77eb5790]
app | 09:44:00.308 [pool-1-thread-1] DEBUG org.springframework.jdbc.support.SQLErrorCodesFactory - SQL error codes for 'PostgreSQL' found
app | 09:44:00.309 [pool-1-thread-1] DEBUG org.springframework.jdbc.support.SQLErrorCodesFactory - Caching SQL error codes for DataSource [com.zaxxer.hikari.HikariDataSource#77eb5790]: database product name is 'PostgreSQL'
app | 09:44:00.310 [pool-1-thread-1] DEBUG org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator - Translating SQLException with SQL state '42P01', error code '0', message [ERROR: relation "spring_session" does not exist
app | Position: 13]; SQL was [DELETE FROM SPRING_SESSION WHERE EXPIRY_TIME < ?] for task [PreparedStatementCallback]
app | 09:44:00.312 [pool-1-thread-1] DEBUG org.springframework.transaction.support.TransactionTemplate - Initiating transaction rollback on application exception
app | org.springframework.jdbc.BadSqlGrammarException: PreparedStatementCallback; bad SQL grammar [DELETE FROM SPRING_SESSION WHERE EXPIRY_TIME < ?]; nested exception is org.postgresql.util.PSQLException: ERROR: relation "spring_session" does not exist
app | Position: 13
app | at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.doTranslate(SQLErrorCodeSQLExceptionTranslator.java:239)
app | at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:70)
app | at org.springframework.jdbc.core.JdbcTemplate.translateException(JdbcTemplate.java:1541)
app | at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:667)
app | at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:960)
app | at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:1015)
app | at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:1025)
app | at org.springframework.session.jdbc.JdbcIndexedSessionRepository.lambda$cleanUpExpiredSessions$8(JdbcIndexedSessionRepository.java:587)
app | at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:140)
app | at org.springframework.session.jdbc.JdbcIndexedSessionRepository.cleanUpExpiredSessions(JdbcIndexedSessionRepository.java:587)
app | at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
app | at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:95)
app | at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
app | at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
app | at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
app | at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
app | at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
app | at java.base/java.lang.Thread.run(Thread.java:829)
app | Caused by: org.postgresql.util.PSQLException: ERROR: relation "spring_session" does not exist
app | Position: 13
app | at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2552)
app | at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2284)
app | at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:322)
app | at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:481)
app | at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:401)
app | at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:164)
app | at org.postgresql.jdbc.PgPreparedStatement.executeUpdate(PgPreparedStatement.java:130)
app | at com.zaxxer.hikari.pool.ProxyPreparedStatement.executeUpdate(ProxyPreparedStatement.java:61)
app | at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeUpdate(HikariProxyPreparedStatement.java)
app | at org.springframework.jdbc.core.JdbcTemplate.lambda$update$2(JdbcTemplate.java:965)
app | at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:651)
app | ... 14 common frames omitted
app | 09:44:00.316 [pool-1-thread-1] DEBUG org.springframework.orm.jpa.JpaTransactionManager - Initiating transaction rollback
app | 09:44:00.317 [pool-1-thread-1] DEBUG org.springframework.orm.jpa.JpaTransactionManager - Rolling back JPA transaction on EntityManager [SessionImpl(1059542725<open>)]
app | 09:44:00.318 [pool-1-thread-1] DEBUG org.hibernate.engine.transaction.internal.TransactionImpl - rolling back
app | 09:44:00.328 [pool-1-thread-1] DEBUG org.springframework.orm.jpa.JpaTransactionManager - Closing JPA EntityManager [SessionImpl(1059542725<open>)] after transaction
app | 09:44:00.330 [pool-1-thread-1] ERROR org.springframework.scheduling.support.TaskUtils$LoggingErrorHandler - Unexpected error occurred in scheduled task
app | org.springframework.jdbc.BadSqlGrammarException: PreparedStatementCallback; bad SQL grammar [DELETE FROM SPRING_SESSION WHERE EXPIRY_TIME < ?]; nested exception is org.postgresql.util.PSQLException: ERROR: relation "spring_session" does not exist
app | Position: 13
app | at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.doTranslate(SQLErrorCodeSQLExceptionTranslator.java:239)
app | at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:70)
app | at org.springframework.jdbc.core.JdbcTemplate.translateException(JdbcTemplate.java:1541)
app | at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:667)
app | at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:960)
app | at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:1015)
app | at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:1025)
app | at org.springframework.session.jdbc.JdbcIndexedSessionRepository.lambda$cleanUpExpiredSessions$8(JdbcIndexedSessionRepository.java:587)
app | at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:140)
app | at org.springframework.session.jdbc.JdbcIndexedSessionRepository.cleanUpExpiredSessions(JdbcIndexedSessionRepository.java:587)
app | at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
app | at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:95)
app | at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
app | at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
app | at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
app | at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
app | at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
app | at java.base/java.lang.Thread.run(Thread.java:829)
app | Caused by: org.postgresql.util.PSQLException: ERROR: relation "spring_session" does not exist
app | Position: 13
app | at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2552)
app | at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2284)
app | at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:322)
app | at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:481)
app | at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:401)
app | at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:164)
app | at org.postgresql.jdbc.PgPreparedStatement.executeUpdate(PgPreparedStatement.java:130)
app | at com.zaxxer.hikari.pool.ProxyPreparedStatement.executeUpdate(ProxyPreparedStatement.java:61)
app | at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeUpdate(HikariProxyPreparedStatement.java)
app | at org.springframework.jdbc.core.JdbcTemplate.lambda$update$2(JdbcTemplate.java:965)
app | at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:651)
app | ... 14 common frames omitted
Here is my configuration:
application.properties
server.port=8080
# Database configuration
spring.datasource.url = jdbc:postgresql://db:5432/mynotes
spring.datasource.username = postgres
spring.datasource.password = postgres
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.PostgreSQLDialect
# JPA configuration
# Hibernate ddl auto (create, create-drop, validate, update)
spring.jpa.hibernate.ddl-auto = create
# To avoid the error line; source: "https://stackoverflow.com/questions/4588755/disabling-contextual-lob-creation-as-createclob-method-threw-error"
spring.jpa.properties.hibernate.temp.use_jdbc_metadata_defaults=false
# Session configuration
spring.session.store-type=jdbc
spring.session.jdbc.initialize-schema=always
spring.session.jdbc.schema=classpath:org/springframework/session/jdbc/schema-postgresql.sql
spring.session.jdbc.table-name=SPRING_SESSION # Name of the database table used to store sessions.
# To allow connections from the mobile app
server.address=0.0.0.0
docker-compose.yml
version: '2'
services:
  app:
    image: "magnan/mynotes:${VERSION}"
    build:
      context: ./App
    container_name: app
    depends_on:
      - db
    ports:
      - "8080:8080"
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://db:5432/mynotes
      - SPRING_DATASOURCE_USERNAME=postgres
      - SPRING_DATASOURCE_PASSWORD=postgres
      - SPRING_JPA_HIBERNATE_DDL_AUTO=create
  db:
    image: 'postgres:13.1'
    container_name: db
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - ./Db/create_db_mynotes.sql:/docker-entrypoint-initdb.d/create_db_mynotes.sql
App/Dockerfile
FROM openjdk:11
VOLUME /tmp
COPY *.jar app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
create_db_mynotes.sql
CREATE DATABASE mynotes;
GRANT ALL PRIVILEGES ON DATABASE mynotes TO postgres;
Do you have any idea why this happens in containers but not in IntelliJ, and how to solve it?
Thanks a lot for your help!
An example of how to create a new instance can be seen below:
JdbcTemplate jdbcTemplate = new JdbcTemplate();
// ... configure jdbcTemplate ...
TransactionTemplate transactionTemplate = new TransactionTemplate();
// ... configure transactionTemplate ...
JdbcIndexedSessionRepository sessionRepository =
        new JdbcIndexedSessionRepository(jdbcTemplate, transactionTemplate);
For additional information on how to create and configure JdbcTemplate and TransactionTemplate, refer to the Spring Framework Reference Documentation.
By default, this implementation uses SPRING_SESSION and SPRING_SESSION_ATTRIBUTES tables to store sessions. Note that the table name can be customized using the setTableName(String) method. In that case the table used to store attributes will be named using the provided table name, suffixed with _ATTRIBUTES. Depending on your database, the table definition can be described as below:
CREATE TABLE SPRING_SESSION (
    PRIMARY_ID CHAR(36) NOT NULL,
    SESSION_ID CHAR(36) NOT NULL,
    CREATION_TIME BIGINT NOT NULL,
    LAST_ACCESS_TIME BIGINT NOT NULL,
    MAX_INACTIVE_INTERVAL INT NOT NULL,
    EXPIRY_TIME BIGINT NOT NULL,
    PRINCIPAL_NAME VARCHAR(100),
    CONSTRAINT SPRING_SESSION_PK PRIMARY KEY (PRIMARY_ID)
);
CREATE UNIQUE INDEX SPRING_SESSION_IX1 ON SPRING_SESSION (SESSION_ID);
CREATE INDEX SPRING_SESSION_IX2 ON SPRING_SESSION (EXPIRY_TIME);
CREATE INDEX SPRING_SESSION_IX3 ON SPRING_SESSION (PRINCIPAL_NAME);
CREATE TABLE SPRING_SESSION_ATTRIBUTES (
    SESSION_PRIMARY_ID CHAR(36) NOT NULL,
    ATTRIBUTE_NAME VARCHAR(200) NOT NULL,
    ATTRIBUTE_BYTES BYTEA NOT NULL,
    CONSTRAINT SPRING_SESSION_ATTRIBUTES_PK PRIMARY KEY (SESSION_PRIMARY_ID, ATTRIBUTE_NAME),
    CONSTRAINT SPRING_SESSION_ATTRIBUTES_FK FOREIGN KEY (SESSION_PRIMARY_ID)
        REFERENCES SPRING_SESSION(PRIMARY_ID) ON DELETE CASCADE
);
CREATE INDEX SPRING_SESSION_ATTRIBUTES_IX1 ON SPRING_SESSION_ATTRIBUTES (SESSION_PRIMARY_ID);
Due to the differences between the various database vendors, especially when it comes to storing binary data, make sure to use the SQL script specific to your database. Scripts for most major database vendors are packaged as org/springframework/session/jdbc/schema-*.sql, where * is the target database type.
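If the schema is never being created inside the container, one workaround (a sketch under assumptions, not a confirmed fix for this setup) is to apply the Spring Session DDL during Postgres initialization, next to create_db_mynotes.sql. Init scripts run against the default database, so the script must switch to mynotes first:
-- Db/init_spring_session.sql (hypothetical file, mounted into /docker-entrypoint-initdb.d/)
\connect mynotes
-- paste the SPRING_SESSION / SPRING_SESSION_ATTRIBUTES DDL shown above,
-- taken from the packaged org/springframework/session/jdbc/schema-postgresql.sql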

Getting NullPointerException while trying to stop Apache ActiveMQ?

I am unable to shut down my ActiveMQ instance gracefully after enabling JMX. Please tell me what I am doing wrong. Here is what I am trying to do.
Start ActiveMQ:
[mwapp#JMNGD1BAO150V02 ~]$ /app/apache-activemq-5.14.0/bin/activemq start xbean:/app/apache-activemq-5.14.0/conf/activemq-security.xml
INFO: Loading '/app/apache-activemq-5.14.0//bin/env'
INFO: Using java '/usr/java/jre1.7.0_79//bin/java'
INFO: Starting - inspect logfiles specified in logging.properties and log4j.properties to get details
INFO: pidfile created : '/app/apache-activemq-5.14.0//data/activemq.pid' (pid '16917')
activemq.log (it looks fine to me):
2017-10-12 13:48:18,936 | INFO | Refreshing org.apache.activemq.xbean.XBeanBrokerFactory$1#2142b533: startup date [Thu Oct 12 13:48:18 IST 2017]; root of context hierarchy | org.apache.activemq.xbean.XBeanBrokerFactory$1 | main
2017-10-12 13:48:20,008 | INFO | Loading properties file from URL [file:/app/apache-activemq-5.14.0//conf/credentials-enc.properties] | org.jasypt.spring31.properties.EncryptablePropertyPlaceholderConfigurer | main
2017-10-12 13:48:20,975 | INFO | Loaded the Bouncy Castle security provider. | org.apache.activemq.broker.BrokerService | main
2017-10-12 13:48:21,283 | INFO | Using Persistence Adapter: KahaDBPersistenceAdapter[/jms_nas/kahadb] | org.apache.activemq.broker.BrokerService | main
2017-10-12 13:48:21,308 | INFO | JMX consoles can connect to service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi | org.apache.activemq.broker.jmx.ManagementContext | JMX connector
2017-10-12 13:48:21,594 | INFO | KahaDB is version 6 | org.apache.activemq.store.kahadb.MessageDatabase | main
2017-10-12 13:48:21,653 | INFO | Recovering from the journal #1105:27118028 | org.apache.activemq.store.kahadb.MessageDatabase | main
2017-10-12 13:48:21,657 | INFO | Recovery replayed 58 operations from the journal in 0.046 seconds. | org.apache.activemq.store.kahadb.MessageDatabase | main
2017-10-12 13:48:21,719 | INFO | PListStore:[/app/apache-activemq-5.14.0/data/localhost/tmp_storage] started | org.apache.activemq.store.kahadb.plist.PListStoreImpl | main
2017-10-12 13:48:21,903 | INFO | Apache ActiveMQ 5.14.0 (localhost, ID:JMNGD1BAO150V02-59661-1507796301746-0:1) is starting | org.apache.activemq.broker.BrokerService | main
2017-10-12 13:48:22,786 | INFO | Listening for connections at: ssl://JMNGD1BAO150V02:61616?needClientAuth=true&maximumConnections=1000&wireFormat.maxFrameSize=104857600 | org.apache.activemq.transport.TransportServerThreadSupport | main
2017-10-12 13:48:22,787 | INFO | Connector ssl started | org.apache.activemq.broker.TransportConnector | main
2017-10-12 13:48:22,787 | INFO | Apache ActiveMQ 5.14.0 (localhost, ID:JMNGD1BAO150V02-59661-1507796301746-0:1) started | org.apache.activemq.broker.BrokerService | main
2017-10-12 13:48:22,787 | INFO | For help or more information please see: http://activemq.apache.org | org.apache.activemq.broker.BrokerService | main
2017-10-12 13:48:22,788 | WARN | Store limit is 102400 mb (current store usage is 1397 mb). The data directory: /jms_nas/kahadb only has 91534 mb of usable space. - resetting to maximum available disk space: 91534 mb | org.apache.activemq.broker.BrokerService | main
2017-10-12 13:48:23,646 | INFO | No Spring WebApplicationInitializer types detected on classpath | /admin | main
2017-10-12 13:48:23,755 | INFO | ActiveMQ WebConsole available at http://localhost:8161/ | org.apache.activemq.web.WebConsoleStarter | main
2017-10-12 13:48:23,755 | INFO | ActiveMQ Jolokia REST API available at http://localhost:8161/api/jolokia/ | org.apache.activemq.web.WebConsoleStarter | main
2017-10-12 13:48:23,799 | INFO | Initializing Spring FrameworkServlet 'dispatcher' | /admin | main
2017-10-12 13:48:24,068 | INFO | No Spring WebApplicationInitializer types detected on classpath | /api | main
2017-10-12 13:48:24,185 | INFO | jolokia-agent: Using policy access restrictor classpath:/jolokia-access.xml | /api | main
Stop ActiveMQ:
[mwapp#JMNGD1BAO150V02 ~]$ /app/apache-activemq-5.14.0/bin/activemq stop xbean:/app/apache-activemq-5.14.0/conf/activemq-security.xml
INFO: Loading '/app/apache-activemq-5.14.0//bin/env'
INFO: Using java '/usr/java/jre1.7.0_79//bin/java'
INFO: Waiting at least 30 seconds for regular process termination of pid '16917' :
Java Runtime: Oracle Corporation 1.7.0_79 /usr/java/jre1.7.0_79
Heap sizes: current=63488k free=61608k max=932352k
JVM args: -Xms64M -Xmx1G -Djava.util.logging.config.file=logging.properties -Djava.security.auth.login.config=/app/apache-activemq-5.14.0//conf/login.config -Dactivemq.classpath=/app/apache-activemq-5.14.0//conf:/app/apache-activemq-5.14.0//../lib/: -Dactivemq.home=/app/apache-activemq-5.14.0/ -Dactivemq.base=/app/apache-activemq-5.14.0/ -Dactivemq.conf=/app/apache-activemq-5.14.0//conf -Dactivemq.data=/app/apache-activemq-5.14.0//data
Extensions classpath:
[/app/apache-activemq-5.14.0/lib,/app/apache-activemq-5.14.0/lib/camel,/app/apache-activemq-5.14.0/lib/optional,/app/apache-activemq-5.14.0/lib/web,/app/apache-activemq-5.14.0/lib/extra]
ACTIVEMQ_HOME: /app/apache-activemq-5.14.0
ACTIVEMQ_BASE: /app/apache-activemq-5.14.0
ACTIVEMQ_CONF: /app/apache-activemq-5.14.0/conf
ACTIVEMQ_DATA: /app/apache-activemq-5.14.0/data
Connecting to JMX URL: service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi
ERROR: java.lang.NullPointerException
java.lang.NullPointerException
at org.apache.activemq.console.command.AbstractCommand.handleException(AbstractCommand.java:167)
at org.apache.activemq.console.command.AbstractJmxCommand.execute(AbstractJmxCommand.java:390)
at org.apache.activemq.console.command.ShellCommand.runTask(ShellCommand.java:154)
at org.apache.activemq.console.command.AbstractCommand.execute(AbstractCommand.java:63)
at org.apache.activemq.console.command.ShellCommand.main(ShellCommand.java:104)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.activemq.console.Main.runTaskClass(Main.java:262)
at org.apache.activemq.console.Main.main(Main.java:115)
...............................
INFO: Regular shutdown not successful, sending SIGKILL to process
INFO: sending SIGKILL to pid '16917'
As you can see, the system is forcefully shut down via its pid, which is not expected. I am currently using apache-activemq-5.14.0 and the configuration files look like the ones below. I am not sure why ActiveMQ provides two separate files for enabling JMX, i.e. env and activemq-security.xml, or whether the env file plays some different role. I read the documentation and got more confused when it mentioned that OCSP is supported from v5.12.0 onwards. Do I need to enable that too?
${ACTIVEMQ_HOME}/bin/env
#!/bin/sh
# Active MQ installation dirs
# ACTIVEMQ_HOME="<Installationdir>/"
# ACTIVEMQ_BASE="$ACTIVEMQ_HOME"
# ACTIVEMQ_CONF="$ACTIVEMQ_BASE/conf"
# ACTIVEMQ_DATA="$ACTIVEMQ_BASE/data"
# ACTIVEMQ_TMP="$ACTIVEMQ_BASE/tmp"
ACTIVEMQ_OPTS_MEMORY="-Xms64M -Xmx1G"
if [ -z "$ACTIVEMQ_OPTS" ] ; then
ACTIVEMQ_OPTS="$ACTIVEMQ_OPTS_MEMORY -Djava.util.logging.config.file=logging.properties -Djava.security.auth.login.config=$ACTIVEMQ_CONF/login.config"
fi
#ACTIVEMQ_OPTS="$ACTIVEMQ_OPTS -Dorg.apache.activemq.audit=true"
# ACTIVEMQ_SUNJMX_START="$ACTIVEMQ_SUNJMX_START -Dcom.sun.management.jmxremote.port=1099"
# ACTIVEMQ_SUNJMX_START="$ACTIVEMQ_SUNJMX_START -Dcom.sun.management.jmxremote.password.file=${ACTIVEMQ_CONF}/jmx.password"
# ACTIVEMQ_SUNJMX_START="$ACTIVEMQ_SUNJMX_START -Dcom.sun.management.jmxremote.access.file=${ACTIVEMQ_CONF}/jmx.access"
# ACTIVEMQ_SUNJMX_START="$ACTIVEMQ_SUNJMX_START -Dcom.sun.management.jmxremote.ssl=true"
ACTIVEMQ_SUNJMX_START="$ACTIVEMQ_SUNJMX_START -Dcom.sun.management.jmxremote"
#ACTIVEMQ_SUNJMX_CONTROL="--jmxurl service:jmx:rmi:///jndi/rmi://127.0.0.1:1099/jmxrmi --jmxuser controlRole --jmxpassword abcd1234"
ACTIVEMQ_SUNJMX_CONTROL=""
if [ -z "$ACTIVEMQ_QUEUEMANAGERURL" ]; then
ACTIVEMQ_QUEUEMANAGERURL="--amqurl tcp://localhost:61616"
fi
if [ -z "$ACTIVEMQ_SSL_OPTS" ] ; then
#ACTIVEMQ_SSL_OPTS="-Djava.security.properties=$ACTIVEMQ_CONF/java.security"
ACTIVEMQ_SSL_OPTS=""
fi
#ACTIVEMQ_DEBUG_OPTS="-Xdebug -Xnoagent -Djava.compiler=NONE -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005"
if [ -z "$ACTIVEMQ_KILL_MAXSECONDS" ]; then
ACTIVEMQ_KILL_MAXSECONDS=30
fi
ACTIVEMQ_USER=""
# ACTIVEMQ_PIDFILE="$ACTIVEMQ_DATA/activemq.pid"
JAVA_HOME="/usr/java/jre1.7.0_79/"
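One detail worth noting before the XML below: because the managementContext in activemq-security.xml (shown next) configures JMX password and access files, the stop command presumably also needs JMX credentials when it connects to the broker. The env file above leaves ACTIVEMQ_SUNJMX_CONTROL empty; a sketch of what it would need (the URL/user/password values are the stock examples from the commented-out line above, not verified for this install):
ACTIVEMQ_SUNJMX_CONTROL="--jmxurl service:jmx:rmi:///jndi/rmi://127.0.0.1:1099/jmxrmi --jmxuser controlRole --jmxpassword abcd1234"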
${ACTIVEMQ_HOME}/conf/activemq-security.xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="${activemq.data}">
    ----------------------------------------------
    <managementContext>
        <!--managementContext createConnector="true" connectorPort="1099"/-->
        <managementContext createConnector="true">
            <property xmlns="http://www.springframework.org/schema/beans" name="environment">
                <map xmlns="http://www.springframework.org/schema/beans">
                    <entry xmlns="http://www.springframework.org/schema/beans" key="jmx.remote.x.password.file" value="${activemq.base}/conf/jmx.password"/>
                    <entry xmlns="http://www.springframework.org/schema/beans" key="jmx.remote.x.access.file" value="${activemq.base}/conf/jmx.access"/>
                </map>
            </property>
        </managementContext>
    </managementContext>
    ----------------------------------------------
    <shutdownHooks>
        <bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook"/>
    </shutdownHooks>
</broker>

Spring SerializingHttpMessageConverter - GZIPResponseStream error

In org.springframework.http.converter.AbstractHttpMessageConverter.write() there is a call to
writeInternal(t, outputMessage);
outputMessage.getBody().flush();
In the implementation in SerializingHttpMessageConverter.writeInternal() there is a call to
objectStream.flush();
objectStream.close();
I guess this is the reason for my error in GZIPResponseStream.flush():
public void flush() throws IOException {
    if (this.closedFlag) {
        throw new IOException("Cannot flush a closed output stream");
    }
    // ...
}
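A minimal sketch of that sequence (an illustrative stand-in class, not the actual Hybris one):
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Stand-in for GZIPResponseStream: flush() refuses to run once closed.
class GuardedStream extends ByteArrayOutputStream {
    private boolean closedFlag;

    @Override
    public void close() throws IOException {
        closedFlag = true;
        super.close();
    }

    @Override
    public void flush() throws IOException {
        if (closedFlag) {
            throw new IOException("Cannot flush a closed output stream");
        }
        super.flush();
    }
}

public class FlushAfterCloseDemo {
    public static void main(String[] args) throws IOException {
        OutputStream body = new GuardedStream();
        body.close(); // what SerializingHttpMessageConverter.writeInternal() effectively does
        body.flush(); // what AbstractHttpMessageConverter.write() does next -> IOException
    }
}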
This produces the message:
INFO | jvm 1 | 2016/04/09 14:05:06.673 | SEVERE: Servlet.service() for servlet dispatcherServlet threw exception
INFO | jvm 1 | 2016/04/09 14:05:06.673 | java.io.IOException: Cannot flush a closed output stream
INFO | jvm 1 | 2016/04/09 14:05:06.673 | at de.hybris.platform.util.GZIPResponseStream.flush(GZIPResponseStream.java:70)
INFO | jvm 1 | 2016/04/09 14:05:06.673 | at org.springframework.http.converter.AbstractHttpMessageConverter.write(AbstractHttpMessageConverter.java:182)
INFO | jvm 1 | 2016/04/09 14:05:06.673 | at org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter$ServletHandlerMethodInvoker.writeWithMessageConverters(AnnotationMethodHandlerAdapter.java:975)
Is there an error in the framework, or am I missing something? Is there another reason for this error? (And, ideally, a fix?)

Spark on Yarn job failed with ExitCode:1 and stderr says "Can't find main class"

We tried to submit a simple SparkPi example to Spark on YARN. The .bat file is written as below:
./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn-cluster --num-executors 3 --driver-memory 4g --executor-memory 1g --executor-cores 1 .\examples\target\spark-examples_2.10-1.4.0.jar 10
pause
Our HDFS and YARN work well. We are using Hadoop 2.7.0 and Spark 1.4.1. We have only one node, which acts as both NameNode and DataNode.
When we execute it, it fails and the log says the following:
2015-08-21 11:07:22,044 DEBUG [main] | ===============================================================================
2015-08-21 11:07:22,044 DEBUG [main] | Yarn AM launch context:
2015-08-21 11:07:22,044 DEBUG [main] | user class: org.apache.spark.examples.SparkPi
2015-08-21 11:07:22,044 DEBUG [main] | env:
2015-08-21 11:07:22,044 DEBUG [main] | CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__hadoop_conf__<CPS>{{PWD}}/__spark__.jar<CPS>%HADOOP_HOME%\etc\hadoop<CPS>%HADOOP_HOME%\share\hadoop\common\*<CPS>%HADOOP_HOME%\share\hadoop\common\lib\*<CPS>%HADOOP_HOME%\share\hadoop\mapreduce\*<CPS>%HADOOP_HOME%\share\hadoop\mapreduce\lib\*<CPS>%HADOOP_HOME%\share\hadoop\hdfs\*<CPS>%HADOOP_HOME%\share\hadoop\hdfs\lib\*<CPS>%HADOOP_HOME%\share\hadoop\yarn\*<CPS>%HADOOP_HOME%\share\hadoop\yarn\lib\*<CPS>%HADOOP_MAPRED_HOME%\share\hadoop\mapreduce\*<CPS>%HADOOP_MAPRED_HOME%\share\hadoop\mapreduce\lib\*
2015-08-21 11:07:22,060 DEBUG [main] | SPARK_YARN_CACHE_FILES_FILE_SIZES -> 165181064,1420218
2015-08-21 11:07:22,060 DEBUG [main] | SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1440062075415_0026
2015-08-21 11:07:22,060 DEBUG [main] | SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE
2015-08-21 11:07:22,060 DEBUG [main] | SPARK_USER -> msrabi
2015-08-21 11:07:22,060 DEBUG [main] | SPARK_YARN_MODE -> true
2015-08-21 11:07:22,060 DEBUG [main] | SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1440126441200,1440126441575
2015-08-21 11:07:22,060 DEBUG [main] | SPARK_YARN_CACHE_FILES -> hdfs://msra-sa-44:9000/user/msrabi/.sparkStaging/application_1440062075415_0026/spark-assembly-1.4.0-hadoop2.7.0.jar#__spark__.jar,hdfs://msra-sa-44:9000/user/msrabi/.sparkStaging/application_1440062075415_0026/spark-examples_2.10-1.4.0.jar#__app__.jar
2015-08-21 11:07:22,060 DEBUG [main] | resources:
2015-08-21 11:07:22,060 DEBUG [main] | __app__.jar -> resource { scheme: "hdfs" host: "msra-sa-44" port: 9000 file: "/user/msrabi/.sparkStaging/application_1440062075415_0026/spark-examples_2.10-1.4.0.jar" } size: 1420218 timestamp: 1440126441575 type: FILE visibility: PRIVATE
2015-08-21 11:07:22,060 DEBUG [main] | __spark__.jar -> resource { scheme: "hdfs" host: "msra-sa-44" port: 9000 file: "/user/msrabi/.sparkStaging/application_1440062075415_0026/spark-assembly-1.4.0-hadoop2.7.0.jar" } size: 165181064 timestamp: 1440126441200 type: FILE visibility: PRIVATE
2015-08-21 11:07:22,060 DEBUG [main] | __hadoop_conf__ -> resource { scheme: "hdfs" host: "msra-sa-44" port: 9000 file: "/user/msrabi/.sparkStaging/application_1440062075415_0026/__hadoop_conf__7908628615251032149.zip" } size: 82888 timestamp: 1440126441794 type: ARCHIVE visibility: PRIVATE
2015-08-21 11:07:22,060 DEBUG [main] | command:
2015-08-21 11:07:22,075 DEBUG [main] | {{JAVA_HOME}}/bin/java -server -Xmx4096m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.app.name=org.apache.spark.examples.SparkPi' '-Dspark.executor.memory=1g' '-Dspark.driver.memory=4g' '-Dspark.master=yarn-cluster' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.deploy.yarn.ApplicationMaster --class 'org.apache.spark.examples.SparkPi' --jar file:/D:/sp/./examples/target/spark-examples_2.10-1.4.0.jar --arg '10' --executor-memory 1024m --executor-cores 1 --num-executors 3 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
2015-08-21 11:07:22,075 DEBUG [main] | ===============================================================================
...........(omitting some lines)......
2015-08-21 11:07:23,231 INFO [main] | Application report for application_1440062075415_0026 (state: ACCEPTED)
2015-08-21 11:07:23,247 DEBUG [main] |
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1440126442169
final status: UNDEFINED
tracking URL: http://msra-sa-44:8088/proxy/application_1440062075415_0026/
user: msrabi
2015-08-21 11:07:24,263 TRACE [main] | 1: Call -> MSRA-SA-44/10.190.173.181:8032: getApplicationReport {application_id { id: 26 cluster_timestamp: 1440062075415 }}
2015-08-21 11:07:24,263 DEBUG [IPC Parameter Sending Thread #0] | IPC Client (443384617) connection to MSRA-SA-44/10.190.173.181:8032 from msrabi sending #37
2015-08-21 11:07:24,263 DEBUG [IPC Client (443384617) connection to MSRA-SA-44/10.190.173.181:8032 from msrabi] | IPC Client (443384617) connection to MSRA-SA-44/10.190.173.181:8032 from msrabi got value #37
2015-08-21 11:07:24,263 DEBUG [main] | Call: getApplicationReport took 0ms
2015-08-21 11:07:24,263 TRACE [main] | 1: Response <- MSRA-SA-44/10.190.173.181:8032: getApplicationReport {application_report { applicationId { id: 26 cluster_timestamp: 1440062075415 } user: "msrabi" queue: "default" name: "org.apache.spark.examples.SparkPi" host: "N/A" rpc_port: -1 yarn_application_state: ACCEPTED trackingUrl: "http://msra-sa-44:8088/proxy/application_1440062075415_0026/" diagnostics: "" startTime: 1440126442169 finishTime: 0 final_application_status: APP_UNDEFINED app_resource_Usage { num_used_containers: 1 num_reserved_containers: 0 used_resources { memory: 4608 virtual_cores: 1 } reserved_resources { memory: 0 virtual_cores: 0 } needed_resources { memory: 4608 virtual_cores: 1 } memory_seconds: 0 vcore_seconds: 0 } originalTrackingUrl: "N/A" currentApplicationAttemptId { application_id { id: 26 cluster_timestamp: 1440062075415 } attemptId: 1 } progress: 0.0 applicationType: "SPARK" }}
2015-08-21 11:07:24,263 INFO [main] | Application report for application_1440062075415_0026 (state: ACCEPTED)
.......(omitting some lines where the state are all ACCEPTED and final status are all UNDEFINED).....
2015-08-21 11:07:30,359 INFO [main] | Application report for application_1440062075415_0026 (state: FAILED)
2015-08-21 11:07:30,359 DEBUG [main] |
client token: N/A
diagnostics: Application application_1440062075415_0026 failed 2 times due to AM Container for appattempt_1440062075415_0026_000002 exited with exitCode: 1
For more detailed output, check application tracking page:http://msra-sa-44:8088/cluster/app/application_1440062075415_0026Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1440062075415_0026_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
at org.apache.hadoop.util.Shell.run(Shell.java:456)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Shell output: 1 file(s) moved.
We then opened stderr, and it says:
Error: Could not find or load main class 'Dspark.app.name=org.apache.spark.examples.SparkPi'
It's so strange: this should be a parameter passed to java, yet java seems to have taken it as the main class. There should be a main class parameter in the command section of the log, but there is not.
How can that happen? What should we do to find out what's wrong with it?
Thank you!
We solved this problem.
The root cause is that when generating the java command line, our Spark uses single quotes ('-Dxxxx') to wrap the parameters. Single quotes work only on Linux. On Windows, the parameters must either not be wrapped, or be wrapped with double quotes ("-Dxxxx"). The only way to solve this was to edit the Spark source code and re-compile it.
It seems that this is currently an open Spark issue. (https://issues.apache.org/jira/browse/SPARK-5754)
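The effect can be reproduced directly (a sketch; cmd.exe, unlike a POSIX shell, does not treat single quotes as quoting characters, so java receives the token verbatim):
REM Works: cmd strips the double quotes, so java sees a normal -D option.
java "-Dspark.app.name=org.apache.spark.examples.SparkPi" -version
REM Fails: the single quotes are passed through, the token no longer starts
REM with "-", so java treats it as the name of the main class and cannot load it.
java '-Dspark.app.name=org.apache.spark.examples.SparkPi' -version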

Spring Data / JPA / Hibernate: Not committing changes

I'm trying to execute a query which has to delete items from a subselect.
Environment:
Wildfly 8.2
Spring Framework 4.1.6
Spring Data 1.8.0
Hibernate 4.3.10
MySQL 5
Here's the repository:
public interface MySecondEntityRepository extends JpaRepository<MySecondEntity, UUID> {

    @Modifying
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    @Query("DELETE FROM MySecondEntity e WHERE e.uuid IN ("
            + "SELECT e2.uuid "
            + "FROM MySecondEntity e2, MyEntity e1 "
            + "WHERE e2.entityUUID = e1.uuid "
            + "AND e1.uuid = :uuid "
            + "AND e1.*** = ..."
            + "AND e1.*** = ..."
            + ")")
    public void deleteByEntityUUID(@Param("uuid") UUID entityUUID, ...);
}
Here's the transaction manager's XML definition:
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="entityManagerFactory" ref="entityManagerFactory" />
</bean>
where entityManagerFactory is a LocalContainerEntityManagerFactoryBean.
I'm using these abstractions so as not to depend on the provider.
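For reference, a minimal sketch of such a factory definition (bean names and the scanned package are assumptions, not taken from the question):
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
    <property name="dataSource" ref="dataSource" />
    <property name="packagesToScan" value="xxx.domain" />
    <property name="jpaVendorAdapter">
        <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter" />
    </property>
</bean>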
In the logs, I can see:
SQL | insert into HT_my_entity select ...
SQL | delete from my_entity where uuid IN (select uuid FROM HT_my_entity);
When I execute the select ... manually, I get the correct uuids to be deleted, but when Spring/Hibernate executes it, the transaction does not commit, even with Propagation.REQUIRES_NEW. No exception is thrown.
Full logs:
14:41:45,073 | DEBUG | AnnotationTransactionAttri:108 | Adding transactional method 'DefaultJpaRepositoryImpl.deleteByEntityUUID' with attribute: PROPAGATION_REQUIRES_NEW,ISOLATION_DEFAULT; ''
14:41:45,073 | DEBUG | JpaTransactionManager :367 | Creating new transaction with name [xxx.spring.jpa.factory.DefaultJpaRepositoryImpl.deleteByEntityUUID]: PROPAGATION_REQUIRES_NEW,ISOLATION_DEFAULT; ''
14:41:45,074 | DEBUG | JpaTransactionManager :371 | Opened new EntityManager [org.hibernate.jpa.internal.EntityManagerImpl#cd414] for JPA transaction
14:41:45,075 | DEBUG | AbstractTransactionImpl :160 | begin
14:41:45,075 | DEBUG | LogicalConnectionImpl :226 | Obtaining JDBC connection
14:41:45,075 | DEBUG | LogicalConnectionImpl :232 | Obtained JDBC connection
14:41:45,076 | DEBUG | JdbcTransaction :69 | initial autocommit status: true
14:41:45,076 | DEBUG | JdbcTransaction :71 | disabling autocommit
14:41:45,076 | DEBUG | JpaTransactionManager :403 | Exposing JPA transaction as JDBC transaction [org.springframework.orm.jpa.vendor.HibernateJpaDialect$HibernateConnectionHandle#7d7135]
14:41:45,078 | DEBUG | JpaTransactionManager :334 | Found thread-bound EntityManager [org.hibernate.jpa.internal.EntityManagerImpl#cd414] for JPA transaction
14:41:45,079 | DEBUG | JpaTransactionManager :417 | Suspending current transaction, creating new transaction with name [xxx.spring.jpa.factory.DefaultJpaRepositoryImpl.deleteByEntityUUID]
14:41:45,082 | DEBUG | JpaTransactionManager :371 | Opened new EntityManager [org.hibernate.jpa.internal.EntityManagerImpl#35e880] for JPA transaction
14:41:45,082 | DEBUG | AbstractTransactionImpl :160 | begin
14:41:45,082 | DEBUG | LogicalConnectionImpl :226 | Obtaining JDBC connection
14:41:45,083 | DEBUG | LogicalConnectionImpl :232 | Obtained JDBC connection
14:41:45,083 | DEBUG | JdbcTransaction :69 | initial autocommit status: true
14:41:45,083 | DEBUG | JdbcTransaction :71 | disabling autocommit
14:41:45,084 | DEBUG | JpaTransactionManager :403 | Exposing JPA transaction as JDBC transaction [org.springframework.orm.jpa.vendor.HibernateJpaDialect$HibernateConnectionHandle#1b46a77]
14:41:45,223 | DEBUG | SQL :109 | insert into HT_my_entity select my_second_0_.uuid as uuid from my_second_entity my_second_0_ where my_second_0_.uuid in (select my_second_1_.uuid from my_second_entity my_second_1_ cross join my_entity my_entity_2_ where my_second_1_.entity_uuid=my_entity_2_.uuid and my_entity_2_.uuid=?)
14:41:45,232 | DEBUG | SQL :109 | delete from my_entity where uuid IN (select uuid FROM HT_my_entity);
14:41:45,333 | DEBUG | JpaTransactionManager :755 | Initiating transaction commit
14:41:45,334 | DEBUG | JpaTransactionManager :512 | Committing JPA transaction on EntityManager [org.hibernate.jpa.internal.EntityManagerImpl#35e880]
14:41:45,335 | DEBUG | AbstractTransactionImpl :175 | committing
14:41:45,388 | DEBUG | JdbcTransaction :113 | committed JDBC Connection
14:41:45,389 | DEBUG | JdbcTransaction :126 | re-enabling autocommit
14:41:45,393 | DEBUG | JpaTransactionManager :600 | Closing JPA EntityManager [org.hibernate.jpa.internal.EntityManagerImpl#35e880] after transaction
14:41:45,394 | DEBUG | EntityManagerFactoryUtils :432 | Closing JPA EntityManager
14:41:45,396 | DEBUG | LogicalConnectionImpl :246 | Releasing JDBC connection
14:41:45,397 | DEBUG | LogicalConnectionImpl :264 | Released JDBC connection
What I tried:
Propagation.REQUIRED and Propagation.REQUIRES_NEW
Calling .flush() after the method call
Unwrapping the subquery to test it; it works correctly
Updating Spring Data / Hibernate
Notes:
I do not include MySecondEntity in MyEntity for performance reasons. A MyEntity can have 10k+ MySecondEntity rows. I know I can use lazy fetching, but the entities are mapped to data transfer objects and this causes issues in the mapping process. Also, I do not want to retrieve all the data just to delete it.
<tx:annotation-driven/> is defined
<aop:aspectj-autoproxy/> is defined
The method is called directly from a @RestController.
