When trying to run a Flume job I am getting the error given below. I am running this on a Cloudera setup.
Kafka is the source.
Morphline is used as an interceptor, with Avro records being created from it.
The sink is HDFS.
The exact same files (morphline, Avro schema, Flume config) work in a test environment, but in this other environment they throw the error below.
2019-07-15 14:24:17,669 WARN org.apache.flume.sink.hdfs.BucketWriter: Caught IOException writing to HDFSWriter (no protocol: value). Closing file (hdfs://8.8.8.8:8020/user/hive/warehouse/folder/folder/FlumeData.1563162656585.tmp) and rethrowing exception.
2019-07-15 14:24:17,670 INFO org.apache.flume.sink.hdfs.BucketWriter: Closing hdfs://8.8.8.8:8020/user/hive/warehouse/folder/folder/FlumeData.1563162656585.tmp
2019-07-15 14:24:17,670 ERROR org.apache.flume.sink.hdfs.HDFSEventSink: process failed
java.lang.NullPointerException
at org.apache.flume.sink.hdfs.AvroEventSerializer.flush(AvroEventSerializer.java:187)
at org.apache.flume.sink.hdfs.HDFSDataStream.close(HDFSDataStream.java:131)
at org.apache.flume.sink.hdfs.BucketWriter$3.call(BucketWriter.java:327)
at org.apache.flume.sink.hdfs.BucketWriter$3.call(BucketWriter.java:323)
at org.apache.flume.sink.hdfs.BucketWriter$9$1.run(BucketWriter.java:701)
at org.apache.flume.auth.SimpleAuthenticator.execute(SimpleAuthenticator.java:50)
at org.apache.flume.sink.hdfs.BucketWriter$9.call(BucketWriter.java:698)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2019-07-15 14:24:17,671 ERROR org.apache.flume.SinkRunner: Unable to deliver event. Exception follows.
org.apache.flume.EventDeliveryException: java.lang.NullPointerException
at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:451)
at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:67)
at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:145)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at org.apache.flume.sink.hdfs.AvroEventSerializer.flush(AvroEventSerializer.java:187)
at org.apache.flume.sink.hdfs.HDFSDataStream.close(HDFSDataStream.java:131)
at org.apache.flume.sink.hdfs.BucketWriter$3.call(BucketWriter.java:327)
at org.apache.flume.sink.hdfs.BucketWriter$3.call(BucketWriter.java:323)
at org.apache.flume.sink.hdfs.BucketWriter$9$1.run(BucketWriter.java:701)
at org.apache.flume.auth.SimpleAuthenticator.execute(SimpleAuthenticator.java:50)
at org.apache.flume.sink.hdfs.BucketWriter$9.call(BucketWriter.java:698)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more
I was able to locate the relevant code in Flume:
https://github.com/apache/flume/blob/trunk/flume-ng-sinks/flume-hdfs-sink/src/main/java/org/apache/flume/sink/hdfs/BucketWriter.java (line:602)
// write the event
try {
  sinkCounter.incrementEventDrainAttemptCount();
  callWithTimeout(new CallRunner<Void>() {
    @Override
    public Void call() throws Exception {
      writer.append(event); // could block
      return null;
    }
  });
} catch (IOException e) {
  LOG.warn("Caught IOException writing to HDFSWriter ({}). Closing file (" +
      bucketPath + ") and rethrowing exception.",
      e.getMessage());
  close(true);
  throw e;
}
Error: Caught IOException writing to HDFSWriter (no protocol: value). Closing file
I am not able to work out what the error "no protocol: value" means.
I am unable to find any reference to this error in any context related to Flume and HDFS.
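Digging further, "no protocol: ..." appears to be the message java.net.URL produces when the string it is handed has no scheme. A minimal demonstration (my own test, not Flume code):

import java.net.MalformedURLException;
import java.net.URL;

public class NoProtocolDemo {
    public static void main(String[] args) {
        try {
            new URL("value"); // no scheme such as file:/ or hdfs://
        } catch (MalformedURLException e) {
            System.out.println(e.getMessage()); // prints: no protocol: value
        }
    }
}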
The interceptor's URL protocol was missing from the configuration: adding the "file:/" prefix in the Flume configuration file fixed the issue.
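For reference, a sketch of the kind of change involved. The agent, source, and interceptor names and the schema path below are placeholders, not my actual config, and attaching the schema URL via a static interceptor is just one common way to do it; the key point is the file:/ prefix on the schema URL that the HDFS sink's AvroEventSerializer reads:

agent.sources.kafkaSrc.interceptors = i1
agent.sources.kafkaSrc.interceptors.i1.type = static
agent.sources.kafkaSrc.interceptors.i1.key = flume.avro.schema.url
agent.sources.kafkaSrc.interceptors.i1.value = file:/path/to/record.avsc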
Similar issue reference: https://community.cloudera.com/t5/Data-Ingestion-Integration/Flume-HDFS-sink-error-quot-unknown-protocol-hdfs-quot/td-p/19344
Related
I'm running a simple etcd connection example with Spring Boot. I keep getting a gRPC-related error in both my Spring Boot logs and the etcd server logs. How can I fix this?
@Scheduled(fixedRate = 5000)
public void test() throws EtcdException, ExecutionException, InterruptedException {
    // create client
    Client client = Client.builder().endpoints("http://localhost:2379").build();
    KV kvClient = client.getKVClient();
    ByteSequence key = ByteSequence.fromBytes("test_key".getBytes());
    ByteSequence val = ByteSequence.fromBytes("test_val".getBytes());
    // put the key-value
    kvClient.put(key, val);
    // get the CompletableFuture
    CompletableFuture<GetResponse> getFuture = kvClient.get(key);
    // get the value from CompletableFuture
    GetResponse response = getFuture.get();
    System.out.println(response.toString());
}
Error
.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_131]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
Caused by: java.util.concurrent.ExecutionException: io.grpc.StatusRuntimeException: INTERNAL: Connection closed with unknown cause
at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:476) ~[guava-19.0.jar:na]
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:455) ~[guava-19.0.jar:na]
at com.coreos.jetcd.internal.impl.Util.lambda$toCompletableFutureWithRetry$1(Util.java:125) ~[jetcd-core-0.0.1.jar:na]
... 3 common frames omitted
Caused by: io.grpc.StatusRuntimeException: INTERNAL: Connection closed with unknown cause
at io.grpc.Status.asRuntimeException(Status.java:526) ~[grpc-core-1.5.0.jar:1.5.0]
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:427) ~[grpc-stub-1.5.0.jar:1.5.0]
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:41) ~[grpc-core-1.5.0.jar:1.5.0]
at com.coreos.jetcd.internal.impl.ClientConnectionManager$AuthTokenInterceptor$1$1.onClose(ClientConnectionManager.java:267) ~[jetcd-core-0.0.1.jar:na]
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:419) ~[grpc-core-1.5.0.jar:1.5.0]
at io.grpc.internal.ClientCallImpl.access$100(ClientCallImpl.java:60) ~[grpc-core-1.5.0.jar:1.5.0]
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:493) ~[grpc-core-1.5.0.jar:1.5.0]
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$500(ClientCallImpl.java:422) ~[grpc-core-1.5.0.jar:1.5.0]
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:525) ~[grpc-core-1.5.0.jar:1.5.0]
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37) ~[grpc-core-1.5.0.jar:1.5.0]
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:102) ~[grpc-core-1.5.0.jar:1.5.0]
... 3 common frames omitted
At the etcd server:
WARNING: 2020/04/16 00:23:15 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
I tried setting the keep-alive property, but it didn't work:
grpc.client.GLOBAL.keep-alive-time=5000
It turns out that the gRPC version was the reason: I was using version 1.5.0, but when I moved to version 1.9.0 the issue was resolved for me.
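With Maven, this amounts to bumping the grpc artifacts that appear in the stack trace. A sketch; adjust for your build tool and for whichever grpc artifacts your build actually pulls in:

<dependency>
    <groupId>io.grpc</groupId>
    <artifactId>grpc-core</artifactId>
    <version>1.9.0</version>
</dependency>
<dependency>
    <groupId>io.grpc</groupId>
    <artifactId>grpc-stub</artifactId>
    <version>1.9.0</version>
</dependency>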
I have two services: A and B. B makes a request via a Feign client when it starts.
But when A is unavailable I get com.netflix.client.ClientException:
Caused by: com.netflix.client.ClientException: Load balancer does not have available server for client: A
I'm looking for the best practice for handling such an exception.
Right now there is no official way to catch a Feign client exception, but you can handle it by catching java.lang.Exception and throwing your own exception.
For example:
try {
    feignClient.feignMethod();
} catch (Exception ex) {
    // throw your own exception
    throw new CustomFeignException();
}
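A slightly fuller sketch of the same idea (the exception class and method names are illustrative, not from any library):

// Your own unchecked exception, so callers are not forced to handle it
public class CustomFeignException extends RuntimeException {
    public CustomFeignException(String message, Throwable cause) {
        super(message, cause);
    }
}

// Wrapping the Feign call to service A
public void callServiceA() {
    try {
        feignClient.feignMethod();
    } catch (Exception ex) {
        // com.netflix.client.ClientException ends up in the cause chain
        throw new CustomFeignException("Service A is unavailable", ex);
    }
}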
I am writing a Camel app that reads LCRs from an Oracle AQ ("AnyData") queue. We are not using WebLogic, and are running under Spring Boot. Right now I'm trying to create a minimal app that reads the LCRs and dumps them to a log. We are getting an exception (full trace shown at the bottom) that says:
c.c.j.DefaultJmsMessageListenerContainer : Setup of JMS message listener invoker failed for destination 'AQADMIN.MULEBUS_ANYDATA_Q' - trying to recover. Cause: JMS-120: Dequeue failed; nested exception is oracle.jms.AQjmsException: JMS-224: Typemap is invalid - must be populated with SQLType/OraDataFactory mappings to receive messages from Sys.AnyData destinations
I think I need to grab the session and populate the type map. The LCRs are in XML format; from one example on the web, I saw I could probably do something like this:
((AQjmsSession) session).getTypeMap().put("SYS.XMLTYPE", Class.forName("oracle.xdb.XMLType"));
... but I am a Camel newbie (not to mention an AnyData newbie), and am not sure whether that will work or how to do it. I am also wondering whether I am using the wrong driver (I see some material pointing to the thin driver, which I am using, and other material pointing to OCI). Note that we are connecting successfully, as nothing happens until we generate an LCR.
Any pointers on how to set up a minimal app that reads the LCRs in Camel and dumps them to a log would be most gratefully appreciated.
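For concreteness, a compilable version of that type-map call would look something like this (the SYS.XMLTYPE mapping comes from that web example and is unverified on my side; it would have to run on every session the listener container uses for the dequeue):

import javax.jms.JMSException;
import javax.jms.Session;
import oracle.jms.AQjmsSession;

// Register the SQLType mapping the AQ driver needs before it can dequeue
// from a Sys.AnyData destination.
static void populateAnyDataTypeMap(Session session) throws JMSException {
    try {
        ((AQjmsSession) session).getTypeMap()
                .put("SYS.XMLTYPE", Class.forName("oracle.xdb.XMLType"));
    } catch (ClassNotFoundException e) {
        // xdb.jar (oracle.xdb.XMLType) must be on the classpath
        throw new JMSException("oracle.xdb.XMLType not found: " + e.getMessage());
    }
}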
The route looks like this currently:
from("oracleAQ:{{xyz.oracleaq.datasource.srcaq}}")
.to("log:out");
... and the referenced component is:
#Qualifier("oracleAQ")
#Bean
public Object oracleAQ() throws JMSException {
LOG.info("OracleAQ startup - url={}, user={}", url, username);
try {
QueueConnectionFactory f = AQjmsFactory.getQueueConnectionFactory(url, new Properties());
UserCredentialsConnectionFactoryAdapter adapter = new UserCredentialsConnectionFactoryAdapter();
adapter.setTargetConnectionFactory(f);
adapter.setUsername(username);
adapter.setPassword(password);
return JmsComponent.jmsComponent(adapter);
} catch (JMSException jmse) {
LOG.error("Error starting OracleAQ component: {}", jmse);
throw jmse;
}
}
TIA.
Stack trace follows:
2016-06-06 11:12:58.632 INFO 110904 --- [EBUS_ANYDATA_Q]] c.c.j.DefaultJmsMessageListenerContainer : Successfully refreshed JMS Connection
2016-06-06 11:13:03.682 WARN 110904 --- [EBUS_ANYDATA_Q]] c.c.j.DefaultJmsMessageListenerContainer : Setup of JMS message listener invoker failed for destination 'AQADMIN.MULEBUS_ANYDATA_Q' - trying to recover. Cause: JMS-120: Dequeue failed; nested exception is oracle.jms.AQjmsException: JMS-224: Typemap is invalid - must be populated with SQLType/OraDataFactory mappings to receive messages from Sys.AnyData destinations
oracle.jms.AQjmsException: JMS-120: Dequeue failed
at oracle.jms.AQjmsError.throwEx(AQjmsError.java:315) ~[aqapi-11.2.0.3.jar:na]
at oracle.jms.AQjmsConsumer.jdbcDequeue(AQjmsConsumer.java:1626) ~[aqapi-11.2.0.3.jar:na]
at oracle.jms.AQjmsConsumer.receiveFromAQ(AQjmsConsumer.java:1035) ~[aqapi-11.2.0.3.jar:na]
at oracle.jms.AQjmsConsumer.receiveFromAQ(AQjmsConsumer.java:960) ~[aqapi-11.2.0.3.jar:na]
at oracle.jms.AQjmsConsumer.receiveFromAQ(AQjmsConsumer.java:938) ~[aqapi-11.2.0.3.jar:na]
at oracle.jms.AQjmsConsumer.receive(AQjmsConsumer.java:790) ~[aqapi-11.2.0.3.jar:na]
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveMessage(AbstractPollingMessageListenerContainer.java:420) ~[spring-jms-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.doReceiveAndExecute(AbstractPollingMessageListenerContainer.java:300) ~[spring-jms-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveAndExecute(AbstractPollingMessageListenerContainer.java:253) ~[spring-jms-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.invokeListener(DefaultMessageListenerContainer.java:1164) ~[spring-jms-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.executeOngoingLoop(DefaultMessageListenerContainer.java:1156) ~[spring-jms-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run(DefaultMessageListenerContainer.java:1053) ~[spring-jms-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_77]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_77]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_77]
Caused by: oracle.jms.AQjmsException: JMS-224: Typemap is invalid - must be populated with SQLType/OraDataFactory mappings to receive messages from Sys.AnyData destinations
at oracle.jms.AQjmsError.throwEx(AQjmsError.java:292) ~[aqapi-11.2.0.3.jar:na]
at oracle.jms.AQjmsConsumer.convertAnydataToMessage(AQjmsConsumer.java:1791) ~[aqapi-11.2.0.3.jar:na]
at oracle.jms.AQjmsConsumer.jdbcDequeue(AQjmsConsumer.java:1437) ~[aqapi-11.2.0.3.jar:na]
... 13 common frames omitted
I am making a connection with Hive using Java code, but I am getting the error below:
log4j:WARN No appenders could be found for logger (org.apache.thrift.transport.TSaslTransport).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Got exception: org.apache.hadoop.security.AccessControlException Permission denied: user=anonymous, access=WRITE, inode="/":oodles:supergroup:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:271)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:257)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:238)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:179)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5904)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5886)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:5860)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3793)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:3763)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3737)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:778)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:573)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
)
at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:275)
at com.oodles.example.HiveJdbcClient.main(HiveJdbcClient.java:23)
My Java code is below:
package com.oodles.example;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class HiveJdbcClient {
    private static String driverName = "org.apache.hive.jdbc.HiveDriver";

    public static void main(String[] args) throws SQLException {
        try {
            Class.forName(driverName);
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        }
        Connection con = DriverManager.getConnection("jdbc:hive2://localhost:10000/default", "", "");
        Statement stmt = con.createStatement();
        String tableName = "testHiveDriverTable";
        stmt.execute("drop table if exists " + tableName);
        stmt.execute("create table " + tableName + " (key int, value string)");
        System.out.println("success!");
        stmt.close();
        con.close();
    }
}
My other concern is that whenever I make a connection without starting the Hadoop services, it gives this error:
log4j:WARN No appenders could be found for logger (org.apache.thrift.transport.TSaslTransport).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Got exception: java.net.ConnectException Call From oodles-Latitude-3540/127.0.1.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused)
at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:275)
at com.oodles.example.HiveJdbcClient.main(HiveJdbcClient.java:21)
The issue gets resolved if I start the Hadoop services, so I want to ask: is it mandatory to start the Hadoop services in order to make a connection with Hive?
You have not mentioned which Hive version you are using, but based on the driver name and connection URL I assume you are using Hive 0.11 or above.
In Hive 0.11 or above you need to mention a username in the connection URL:
DriverManager.getConnection("jdbc:hive2://localhost:10000/default", <user_name>, "")
NOTE: This user should have read+write permissions in HDFS.
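For example, using the HDFS owner visible in your stack trace (inode "/" is owned by user "oodles"):

Connection con = DriverManager.getConnection(
        "jdbc:hive2://localhost:10000/default", "oodles", "");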
Regarding your second query:
I am fairly sure that the Hadoop services are not required just for the connection, though I have never tried it myself. My assumption is that, since we mention a database in the connection URL, and a database is a directory in HDFS, the NameNode service might be needed to check the existence of that directory.
Hope it helps!
I am using Spring and Hibernate to save and update against two database servers simultaneously, using two datasource configurations.
I am using the
org.springframework.jdbc.datasource.DriverManagerDataSource
class for creating the datasources.
If both database servers are up and running, everything works fine.
But if I shut down one of the database servers and try to catch the resulting exception, it never reaches my catch block,
and the stack trace is displayed in my web browser instead.
Stack Trace:
** BEGIN NESTED EXCEPTION **
java.net.ConnectException
MESSAGE: Connection refused: connect
STACKTRACE:
java.net.ConnectException: Connection refused: connect
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
at java.net.Socket.connect(Socket.java:525)
at java.net.Socket.connect(Socket.java:475)
at java.net.Socket.<init>(Socket.java:372)
at java.net.Socket.<init>(Socket.java:215)
at com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:256)
at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:271)
at com.mysql.jdbc.Connection.createNewIO(Connection.java:2771)
at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
at java.sql.DriverManager.getConnection(DriverManager.java:582)
at java.sql.DriverManager.getConnection(DriverManager.java:154)
at org.springframework.jdbc.datasource.DriverManagerDataSource.getConnectionFromDriverManager(DriverManagerDataSource.java:173)
at org.springframework.jdbc.datasource.DriverManagerDataSource.getConnectionFromDriver(DriverManagerDataSource.java:164)
at org.springframework.jdbc.datasource.AbstractDriverBasedDataSource.getConnectionFromDriver(AbstractDriverBasedDataSource.java:149)
at org.springframework.jdbc.datasource.AbstractDriverBasedDataSource.getConnection(AbstractDriverBasedDataSource.java:119)
at org.springframework.orm.hibernate3.LocalDataSourceConnectionProvider.getConnection(LocalDataSourceConnectionProvider.java:82)
at org.hibernate.jdbc.ConnectionManager.openConnection(ConnectionManager.java:446)
at org.hibernate.jdbc.ConnectionManager.getConnection(ConnectionManager.java:167)
at org.hibernate.jdbc.AbstractBatcher.prepareStatement(AbstractBatcher.java:116)
at org.hibernate.id.insert.AbstractSelectingDelegate.performInsert(AbstractSelectingDelegate.java:54)
at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:2186)
at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:2666)
at org.hibernate.action.EntityIdentityInsertAction.execute(EntityIdentityInsertAction.java:71)
at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:279)
at org.hibernate.event.def.AbstractSaveEventListener.performSaveOrReplicate(AbstractSaveEventListener.java:321)
at org.hibernate.event.def.AbstractSaveEventListener.performSave(AbstractSaveEventListener.java:204)
at org.hibernate.event.def.AbstractSaveEventListener.saveWithGeneratedId(AbstractSaveEventListener.java:130)
at org.hibernate.event.def.DefaultSaveOrUpdateEventListener.saveWithGeneratedOrRequestedId(DefaultSaveOrUpdateEventListener.java:210)
at org.hibernate.event.def.DefaultSaveEventListener.saveWithGeneratedOrRequestedId(DefaultSaveEventListener.java:56)
at org.hibernate.event.def.DefaultSaveOrUpdateEventListener.entityIsTransient(DefaultSaveOrUpdateEventListener.java:195)
at org.hibernate.event.def.DefaultSaveEventListener.performSaveOrUpdate(DefaultSaveEventListener.java:50)
at org.hibernate.event.def.DefaultSaveOrUpdateEventListener.onSaveOrUpdate(DefaultSaveOrUpdateEventListener.java:93)
at org.hibernate.impl.SessionImpl.fireSave(SessionImpl.java:562)
at org.hibernate.impl.SessionImpl.save(SessionImpl.java:550)
at org.hibernate.impl.SessionImpl.save(SessionImpl.java:546)
at org.springframework.orm.hibernate3.HibernateTemplate$12.doInHibernate(HibernateTemplate.java:686)
at org.springframework.orm.hibernate3.HibernateTemplate$12.doInHibernate(HibernateTemplate.java:1)
at org.springframework.orm.hibernate3.HibernateTemplate.doExecute(HibernateTemplate.java:406)
at org.springframework.orm.hibernate3.HibernateTemplate.executeWithNativeSession(HibernateTemplate.java:374)
at org.springframework.orm.hibernate3.HibernateTemplate.save(HibernateTemplate.java:683)
at com.service.RegistrationServiceImpl.add(RegistrationServiceImpl.java:56)
at com.controller.RegistrationController.onSubmit(RegistrationController.java:48)
at org.springframework.web.servlet.mvc.SimpleFormController.processFormSubmission(SimpleFormController.java:272)
at org.springframework.web.servlet.mvc.AbstractFormController.handleRequestInternal(AbstractFormController.java:268)
at org.springframework.web.servlet.mvc.AbstractController.handleRequest(AbstractController.java:153)
at org.springframework.web.servlet.mvc.SimpleControllerHandlerAdapter.handle(SimpleControllerHandlerAdapter.java:48)
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:763)
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:709)
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:613)
at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:536)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:647)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:269)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:172)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:117)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:108)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:174)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:879)
at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:665)
at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:528)
at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:81)
at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:689)
at java.lang.Thread.run(Thread.java:619)
** END NESTED EXCEPTION **
How can I handle this exception?
I want to display in my GUI that one of my DB servers is down whenever any of them is down.
Please help with this.
I think there are two options:
1. Catch the exception in your service layer (for example in RegistrationServiceImpl), but you will have to do this in all your service classes.
2. Create a servlet filter that catches this specific exception and redirects the user to a page showing something like "Sorry, the database is down" (a sketch follows below).
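For option 2, a minimal sketch of such a filter. The class name, the cause-chain walk, and the error page are illustrative; adapt the redirect target to your application:

import java.io.IOException;
import java.net.ConnectException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

public class DatabaseDownFilter implements Filter {

    public void init(FilterConfig filterConfig) throws ServletException {
    }

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        try {
            chain.doFilter(request, response);
        } catch (RuntimeException e) {
            // Spring/Hibernate wrap the ConnectException, so walk the cause chain
            for (Throwable t = e; t != null; t = t.getCause()) {
                if (t instanceof ConnectException) {
                    ((HttpServletResponse) response).sendRedirect("/dbDown.jsp");
                    return;
                }
            }
            throw e;
        }
    }

    public void destroy() {
    }
}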
Spring also has the @ExceptionHandler annotation. You can create a handler method in some common class of yours; whenever an exception is thrown, you can conveniently show the message you want depending on the exception type, something like this:
@Component
public class BaseController {

    @ExceptionHandler(Throwable.class)
    public ModelAndView handleException(Throwable throwable, HttpServletRequest request) {
        ModelAndView view = null;
        if (throwable instanceof ConnectException) {
            view = new ModelAndView("dbDown"); // set message and return view (view name is illustrative)
        } else if (throwable instanceof SomeOtherException) {
            // do something else;
            // handle as many exceptions as you want
        } else {
            // do something by default for other unhandled/generic exceptions
        }
        return view;
    }
}
The documentation explains more on the return types supported.
You can also make the handler handle only the kind of exception you want, something like this:
@ExceptionHandler(ConnectException.class)
public ModelAndView handleException(ConnectException ex, HttpServletRequest request) {
    // handle it and return the view you want
    return null;
}
You can check this example, which is along similar lines.
I hope this helps. Cheers :)