While trying to update some fields in the DB using the repo.save() method, the DB is not updated, although the method returns the modified entity as expected, and no error shows in the console.
For information, I have added the @Transactional annotation to the method, but the DB still does not change.
And when I try to update through a native query, I get an error even though I added @Transactional:
org.springframework.dao.InvalidDataAccessApiUsageException: Executing an update/delete query; nested exception is javax.persistence.TransactionRequiredException: Executing an update/delete query
at org.springframework.orm.jpa.EntityManagerFactoryUtils.convertJpaAccessExceptionIfPossible(EntityManagerFactoryUtils.java:403) ~[spring-orm-5.2.10.RELEASE.jar:5.2.10.RELEASE]
at org.springframework.orm.jpa.vendor.HibernateJpaDialect.translateExceptionIfPossible(HibernateJpaDialect.java:257) ~[spring-orm-5.2.10.RELEASE.jar:5.2.10.RELEASE]
at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.translateExceptionIfPossible(AbstractEntityManagerFactoryBean.java:528) ~[spring-orm-5.2.10.RELEASE.jar:5.2.10.RELEASE]
at org.springframework.dao.support.ChainedPersistenceExceptionTranslator.translateExceptionIfPossible(ChainedPersistenceExceptionTranslator.java:61) ~[spring-tx-5.2.10.RELEASE.jar:5.2.10.RELEASE]
at org.springframework.dao.support.DataAccessUtils.translateIfNecessary(DataAccessUtils.java:242) ~[spring-tx-5.2.10.RELEASE.jar:5.2.10.RELEASE]
at org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:154) ~[spring-tx-5.2.10.RELEASE.jar:5.2.10.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) ~[spring-aop-5.2.10.RELEASE.jar:5.2.10.RELEASE]
at org.springframework.data.jpa.repository.support.CrudMethodMetadataPostProcessor$CrudMethodMetadataPopulatingMethodInterceptor.invoke(CrudMethodMetadataPostProcessor.java:149) ~[spring-data-jpa-2.3.5.RELEASE.jar:2.3.5.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) ~[spring-aop-5.2.10.RELEASE.jar:5.2.10.RELEASE]
at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:95) ~[spring-aop-5.2.10.RELEASE.jar:5.2.10.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) ~[spring-aop-5.2.10.RELEASE.jar:5.2.10.RELEASE]
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212) ~[spring-aop-5.2.10.RELEASE.jar:5.2.10.RELEASE]
What is wrong with the update? I need help.
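For reference, here is a minimal sketch of the kind of setup this error usually involves; the entity, table, and method names are hypothetical, not the asker's actual code. Spring Data JPA requires @Modifying on native update/delete queries, and @Transactional only takes effect when the method is called through the Spring proxy (self-invocation bypasses it):

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Modifying;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

public interface UserRepository extends JpaRepository<User, Long> {

    // @Modifying marks this as an update query; without it, or without an
    // active transaction, Spring throws the TransactionRequiredException above.
    @Modifying
    @Query(value = "UPDATE users SET status = :status WHERE id = :id", nativeQuery = true)
    int updateStatus(@Param("id") Long id, @Param("status") String status);
}

@Service
class UserService {

    private final UserRepository repo;

    UserService(UserRepository repo) {
        this.repo = repo;
    }

    // Must be invoked from outside this class so the transactional proxy applies;
    // calling it via this.changeStatus(...) would skip the transaction entirely.
    @Transactional
    public int changeStatus(Long id, String status) {
        return repo.updateStatus(id, status);
    }
}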
I am working on a Spring Batch application that internally inserts a new entry into BATCH_JOB_INSTANCE. The Spring version is 2.2.1 and the database is Azure SQL.
While running the job I get the error below. As suggested on one site, I set IDENTITY_INSERT to ON, even though I have not used any INSERT statements against BATCH_JOB_INSTANCE, but it was of no use.
Caused by: org.springframework.dao.DataIntegrityViolationException: PreparedStatementCallback; SQL [INSERT into BATCH_JOB_INSTANCE(JOB_INSTANCE_ID, JOB_NAME, JOB_KEY, VERSION) values (?, ?, ?, ?)]; Cannot insert explicit value for identity column in table 'BATCH_JOB_INSTANCE' when IDENTITY_INSERT is set to OFF.; nested exception is com.microsoft.sqlserver.jdbc.SQLServerException: Cannot insert explicit value for identity column in table 'BATCH_JOB_INSTANCE' when IDENTITY_INSERT is set to OFF.
at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.doTranslate(SQLErrorCodeSQLExceptionTranslator.java:247) ~[spring-jdbc-5.2.1.RELEASE.jar:5.2.1.RELEASE]
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:72) ~[spring-jdbc-5.2.1.RELEASE.jar:5.2.1.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.translateException(JdbcTemplate.java:1443) ~[spring-jdbc-5.2.1.RELEASE.jar:5.2.1.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:633) ~[spring-jdbc-5.2.1.RELEASE.jar:5.2.1.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:862) ~[spring-jdbc-5.2.1.RELEASE.jar:5.2.1.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:917) ~[spring-jdbc-5.2.1.RELEASE.jar:5.2.1.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:922) ~[spring-jdbc-5.2.1.RELEASE.jar:5.2.1.RELEASE]
at org.springframework.batch.core.repository.dao.JdbcJobInstanceDao.createJobInstance(JdbcJobInstanceDao.java:120) ~[spring-batch-core-4.2.0.RELEASE.jar:4.2.0.RELEASE]
at org.springframework.batch.core.repository.support.SimpleJobRepository.createJobExecution(SimpleJobRepository.java:140) ~[spring-batch-core-4.2.0.RELEASE.jar:4.2.0.RELEASE]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_144]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_144]
It would be better if you posted your Job configuration...
DataIntegrityViolationException states that an attempt to insert or update data resulted in a violation of an integrity constraint.
DataIntegrityViolationException Docs
In the job configuration, jobBuilderFactory.get(JOB_NAME).incrementer(new RunIdIncrementer()) increments the run id so that every run gets a unique id, as in the sketch below.
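For reference, a minimal sketch of what such a job configuration typically looks like in Spring Batch 4.x; the job name and step are illustrative, not the asker's actual configuration:

import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.launch.support.RunIdIncrementer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class JobConfig {

    private static final String JOB_NAME = "myJob"; // illustrative name

    @Bean
    public Job job(JobBuilderFactory jobBuilderFactory, Step step) {
        return jobBuilderFactory.get(JOB_NAME)
                .incrementer(new RunIdIncrementer()) // new run id on every launch
                .start(step)
                .build();
    }
}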
I have a JHipster Spring Boot application in production, and after a while it gives this error:
SQL: delete from jhi_persistent_audit_event where event_id=?
HHH000315: Exception executing batch [org.hibernate.StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1], SQL: delete from jhi_persistent_audit_event where event_id=?
2020-03-01 12:00:00.132 ERROR 14354 --- [ms-scheduling-1] o.h.i.ExceptionMapperStandardImpl : HHH000346: Error during managed flush [Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1]
2020-03-01 12:00:00.137 ERROR 14354 --- [ms-scheduling-1] o.s.s.s.TaskUtils$LoggingErrorHandler : Unexpected error occurred in scheduled task.
Unexpected error occurred in scheduled task.
org.springframework.orm.ObjectOptimisticLockingFailureException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1; nested exception is org.hibernate.StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1
at org.springframework.orm.jpa.vendor.HibernateJpaDialect.convertHibernateAccessException(HibernateJpaDialect.java:339)
at org.springframework.orm.jpa.vendor.HibernateJpaDialect.translateExceptionIfPossible(HibernateJpaDialect.java:254)
at org.springframework.orm.jpa.JpaTransactionManager.doCommit(JpaTransactionManager.java:537)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:746)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:714)
at org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:534)
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:305)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:98)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:689)
at com.gotop.nms.service.AuditEventService$$EnhancerBySpringCGLIB$$3c01613a.removeOldAuditEvents(<generated>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.scheduling.support.ScheduledMethodRunnable.run(ScheduledMethodRunnable.java:84)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:93)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.hibernate.StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1
at org.hibernate.jdbc.Expectations$BasicExpectation.checkBatched(Expectations.java:67)
at org.hibernate.jdbc.Expectations$BasicExpectation.verifyOutcome(Expectations.java:54)
at org.hibernate.engine.jdbc.batch.internal.BatchingBatch.checkRowCounts(BatchingBatch.java:149)
at org.hibernate.engine.jdbc.batch.internal.BatchingBatch.performExecution(BatchingBatch.java:124)
at org.hibernate.engine.jdbc.batch.internal.BatchingBatch.addToBatch(BatchingBatch.java:89)
at org.hibernate.persister.entity.AbstractEntityPersister.delete(AbstractEntityPersister.java:3498)
at org.hibernate.persister.entity.AbstractEntityPersister.delete(AbstractEntityPersister.java:3755)
at org.hibernate.action.internal.EntityDeleteAction.execute(EntityDeleteAction.java:99)
at org.hibernate.engine.spi.ActionQueue.executeActions(ActionQueue.java:604)
at org.hibernate.engine.spi.ActionQueue.executeActions(ActionQueue.java:478)
at org.hibernate.event.internal.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:356)
at org.hibernate.event.internal.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:39)
at org.hibernate.internal.SessionImpl.doFlush(SessionImpl.java:1483)
at org.hibernate.internal.SessionImpl.managedFlush(SessionImpl.java:512)
at org.hibernate.internal.SessionImpl.flushBeforeTransactionCompletion(SessionImpl.java:3321)
at org.hibernate.internal.SessionImpl.beforeTransactionCompletion(SessionImpl.java:2517)
at org.hibernate.engine.jdbc.internal.JdbcCoordinatorImpl.beforeTransactionCompletion(JdbcCoordinatorImpl.java:447)
at org.hibernate.resource.transaction.backend.jdbc.internal.JdbcResourceLocalTransactionCoordinatorImpl.beforeCompletionCallback(JdbcResourceLocalTransactionCoordinatorImpl.java:178)
at org.hibernate.resource.transaction.backend.jdbc.internal.JdbcResourceLocalTransactionCoordinatorImpl.access$300(JdbcResourceLocalTransactionCoordinatorImpl.java:39)
at org.hibernate.resource.transaction.backend.jdbc.internal.JdbcResourceLocalTransactionCoordinatorImpl$TransactionDriverControlImpl.commit(JdbcResourceLocalTransactionCoordinatorImpl.java:271)
at org.hibernate.engine.transaction.internal.TransactionImpl.commit(TransactionImpl.java:104)
at org.springframework.orm.jpa.JpaTransactionManager.doCommit(JpaTransactionManager.java:533)
... 22 common frames omitted
The database is MySQL.
This exception can happen when deleting a record by an id that does not exist at all. So how do I fix this in JHipster?
Where do you think this SQL is called?
This occurs in the removeOldAuditEvents method of AuditEventService, which is annotated with @Transactional at class level.
This method is annotated with @Scheduled, and you have multiple instances of your app running. So each day, at the same hour, all your instances compete to purge events older than 30 days.
This is a classic case of batch jobs in multi-instance apps.
So, you have several alternatives:
select one instance to be responsible for purging events, perhaps with a Spring profile
externalize the scheduling by exposing your purge method as a properly secured API endpoint (see AuditResource) that you call from cron or any external scheduler, using an API gateway to route to only one instance
catch ObjectOptimisticLockingFailureException and ignore it in this method, as in the sketch after this list; in general this is not recommended, but here I think it is acceptable because one instance will succeed, which is what you want. Configuring pessimistic locking might also make sense.
implement a distributed lock, either in the database or in Hazelcast, which you may already be using for distributed caching
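One caveat on the catch-and-ignore alternative: the stack trace shows the exception being raised at commit time (JpaTransactionManager.doCommit), after the @Transactional method has already returned, so catching it inside removeOldAuditEvents itself would not work. A minimal sketch of a thin non-transactional wrapper, with hypothetical names and an illustrative schedule:

import org.springframework.orm.ObjectOptimisticLockingFailureException;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class AuditPurgeScheduler {

    private final AuditEventService auditEventService;

    public AuditPurgeScheduler(AuditEventService auditEventService) {
        this.auditEventService = auditEventService;
    }

    @Scheduled(cron = "0 0 12 * * ?") // illustrative schedule
    public void purge() {
        try {
            // removeOldAuditEvents stays @Transactional; the commit happens
            // when it returns, which is where the exception surfaces.
            auditEventService.removeOldAuditEvents();
        } catch (ObjectOptimisticLockingFailureException e) {
            // Another instance deleted the same rows first. For this purge
            // job the rows are gone either way, so ignoring is acceptable.
        }
    }
}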
I am getting this error when attempting to stream an RDS MySQL table into Redshift: Error converting data, invalid type for parameter
The problem field is a DATETIME in MySQL and a timestamp without time zone in Redshift (the same happens with timestamp with time zone). Note: the pipeline was working fine until I populated the date field.
We are using Debezium as the Kafka Connect source to get data from RDS into Kafka, and the JDBC sink connector with the Redshift JDBC driver for the sink.
Also... I am able to get the data flowing if I make the Redshift field a varchar or a bigint. When I do this, I see that the data comes across as a Unix epoch integer in milliseconds. But we'd really like a timestamp!
Error message in context:
2018-10-18 22:48:32,972 DEBUG || INSERT sql: INSERT INTO "funschema"."test_table"("user_id","subscription_code","source","receipt","starts_on") VALUES(?,?,?,?,?) [io.confluent.connect.jdbc.sink.BufferedRecords]
2018-10-18 22:48:32,987 WARN || Write of 28 records failed, remainingRetries=7 [io.confluent.connect.jdbc.sink.JdbcSinkTask]
java.sql.BatchUpdateException: [Amazon][JDBC](10120) Error converting data, invalid type for parameter: 5.
at com.amazon.jdbc.common.SStatement.createBatchUpdateException(Unknown Source)
at com.amazon.jdbc.common.SStatement.access$100(Unknown Source)
at com.amazon.jdbc.common.SStatement$BatchExecutionContext.createBatchUpdateException(Unknown Source)
at com.amazon.jdbc.common.SStatement$BatchExecutionContext.createResults(Unknown Source)
at com.amazon.jdbc.common.SStatement$BatchExecutionContext.doProcess(Unknown Source)
at com.amazon.jdbc.common.SStatement$BatchExecutionContext.processInt(Unknown Source)
at com.amazon.jdbc.common.SStatement.processBatchResults(Unknown Source)
at com.amazon.jdbc.common.SPreparedStatement.executeBatch(Unknown Source)
at io.confluent.connect.jdbc.sink.BufferedRecords.flush(BufferedRecords.java:138)
at io.confluent.connect.jdbc.sink.JdbcDbWriter.write(JdbcDbWriter.java:66)
at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:75)
Thanks,
Tom
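One possible mitigation, assuming the epoch-millis integers come from Debezium's default time.precision.mode: either set time.precision.mode=connect on the Debezium source so it emits Kafka Connect's logical Timestamp type, or convert the field on the sink side with Connect's built-in TimestampConverter SMT. A sketch of the sink-side option in connector properties form ("starts_on" is taken from the INSERT in the log above; adjust to the actual column):

transforms=convertTs
transforms.convertTs.type=org.apache.kafka.connect.transforms.TimestampConverter$Value
transforms.convertTs.field=starts_on
transforms.convertTs.target.type=Timestamp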
I was doing the Hadoop (2.6.0) Twitter example with Flume (1.5.2) and Hive (0.14.0). I got data from Twitter successfully via Flume and stored it in my own HDFS.
But when I wanted to use Hive to analyze these data (only selecting one field from a table), the exception "Failed with exception java.io.IOException:org.apache.avro.AvroRuntimeException: java.io.EOFException" occurred, and I could find little useful information related to it.
Actually, I can fetch most records of a file successfully (as in the output below, I fetched 5100 rows), but it fails at the end, so I cannot process all the tweet files together.
Time taken: 1.512 seconds, Fetched: 5100 row(s)
Failed with exception java.io.IOException:org.apache.avro.AvroRuntimeException: java.io.EOFException
15/04/15 19:59:18 [main]: ERROR CliDriver: Failed with exception java.io.IOException:org.apache.avro.AvroRuntimeException: java.io.EOFException
java.io.IOException: org.apache.avro.AvroRuntimeException: java.io.EOFException
at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:663)
at org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:561)
at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:138)
at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1621)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:267)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:199)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:410)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:783)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:677)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:616)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.avro.AvroRuntimeException: java.io.EOFException
at org.apache.avro.file.DataFileStream.next(DataFileStream.java:222)
at org.apache.hadoop.hive.ql.io.avro.AvroGenericRecordReader.next(AvroGenericRecordReader.java:153)
at org.apache.hadoop.hive.ql.io.avro.AvroGenericRecordReader.next(AvroGenericRecordReader.java:52)
at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:629)
... 15 more
Caused by: java.io.EOFException
at org.apache.avro.io.BinaryDecoder.ensureBounds(BinaryDecoder.java:473)
at org.apache.avro.io.BinaryDecoder.readInt(BinaryDecoder.java:128)
at org.apache.avro.io.BinaryDecoder.readString(BinaryDecoder.java:259)
at org.apache.avro.io.ValidatingDecoder.readString(ValidatingDecoder.java:107)
at org.apache.avro.generic.GenericDatumReader.readString(GenericDatumReader.java:348)
at org.apache.avro.generic.GenericDatumReader.readString(GenericDatumReader.java:341)
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:154)
at org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:177)
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:148)
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:139)
at org.apache.avro.file.DataFileStream.next(DataFileStream.java:233)
at org.apache.avro.file.DataFileStream.next(DataFileStream.java:220)
... 18 more
I used the HQL below to create the table:
CREATE TABLE tweets
ROW FORMAT SERDE
'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS INPUTFORMAT
'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
TBLPROPERTIES ('avro.schema.url'='file:///home/hduser/hive-0.14.0-bin/tweetsdoc_new.avsc');
Then I load the tweets file from HDFS:
LOAD DATA INPATH '/user/flume/tweets/FlumeData.1429098355304' OVERWRITE INTO TABLE tweets;
Could anyone tell me the possible reason, or an effective way to find more details of the exception?
I had this annoying problem as well.
I looked at the produced binary file and stepped through the Avro deserialization byte by byte.
The reason for this EOFException was that Flume inserts a newline character byte after every event (you can see 0x0A after every record).
The Avro deserializer thinks the file hasn't finished: it interprets that byte as a count of blocks to read, and then fails to read that many blocks before hitting EOF.
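To see where deserialization breaks without going through Hive, a small standalone reader can be pointed at one of the Flume files copied to local disk. A minimal sketch, assuming the Avro jars are on the classpath (the file name is the one from the question):

import java.io.File;
import org.apache.avro.file.DataFileReader;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;

public class AvroScan {

    public static void main(String[] args) throws Exception {
        File avroFile = new File("FlumeData.1429098355304");
        long count = 0;
        try (DataFileReader<GenericRecord> reader =
                     new DataFileReader<>(avroFile, new GenericDatumReader<>())) {
            while (reader.hasNext()) {
                reader.next();
                count++;
            }
            System.out.println("Read " + count + " records cleanly");
        } catch (Exception e) {
            // With stray 0x0A bytes between blocks, this fails near the end of
            // the file, mirroring the EOFException that Hive reports.
            System.out.println("Failed after " + count + " records: " + e);
        }
    }
}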
I'm trying to get some data from a remote Oracle database, so I configured a new connection to the database. When I press Test, it says the connection was established successfully, but when I try a simple SELECT query, Report Designer gives me an error:
org.pentaho.reporting.engine.classic.core.ReportDataFactoryException: Failed at query: select * from fact_table;
at org.pentaho.reporting.engine.classic.core.modules.misc.datafactory.sql.SimpleSQLReportDataFactory.queryData(SimpleSQLReportDataFactory.java:258)
at org.pentaho.reporting.engine.classic.core.modules.misc.datafactory.sql.SQLReportDataFactory.queryData(SQLReportDataFactory.java:171)
at org.pentaho.reporting.ui.datasources.jdbc.ui.JdbcPreviewWorker.run(JdbcPreviewWorker.java:103)
at java.lang.Thread.run(Unknown Source)
ParentException:
java.sql.SQLException: ORA-00911: invalid character
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:125)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:305)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:272)
at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:623)
at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:112)
at oracle.jdbc.driver.T4CStatement.execute_for_describe(T4CStatement.java:351)
at oracle.jdbc.driver.OracleStatement.execute_maybe_describe(OracleStatement.java:896)
at oracle.jdbc.driver.T4CStatement.execute_maybe_describe(T4CStatement.java:383)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:986)
at oracle.jdbc.driver.OracleStatement.doScrollExecuteCommon(OracleStatement.java:3763)
at oracle.jdbc.driver.OracleStatement.doScrollStmtExecuteQuery(OracleStatement.java:3887)
at oracle.jdbc.driver.OracleStatement.executeQuery(OracleStatement.java:1131)
at org.pentaho.reporting.engine.classic.core.modules.misc.datafactory.sql.SimpleSQLReportDataFactory.parametrizeAndQuery(SimpleSQLReportDataFactory.java:422)
at org.pentaho.reporting.engine.classic.core.modules.misc.datafactory.sql.SimpleSQLReportDataFactory.queryData(SimpleSQLReportDataFactory.java:254)
at org.pentaho.reporting.engine.classic.core.modules.misc.datafactory.sql.SQLReportDataFactory.queryData(SQLReportDataFactory.java:171)
at org.pentaho.reporting.ui.datasources.jdbc.ui.JdbcPreviewWorker.run(JdbcPreviewWorker.java:103)
at java.lang.Thread.run(Unknown Source)
So how can I get this done?
select * from fact_table; seems to be a valid query. Try removing the semicolon at the end: when a statement is sent through JDBC, Oracle rejects the trailing ; with ORA-00911, so run select * from fact_table instead.