Hive action fails in Oozie executed from Hue UI

I am executing an Oozie Hive action from the Hue UI and it fails with the following exception in the Hive logs:
2016-07-15 15:27:58,430 ERROR org.apache.thrift.server.TThreadPoolServer: Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: Unsupported mechanism type PLAIN
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:739)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:736)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1651)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java:736)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException: Unsupported mechanism type PLAIN
at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:138)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
... 10 more
Kerberos is enabled on the Hadoop cluster, and all the required settings are present in hive-site.xml.
I am also able to connect to HiveServer2 from the command line.
Can anyone help me understand what is going wrong here?

It seems like you are trying to use plain authentication to connect to a secured environment:
Unsupported mechanism type PLAIN
I am not sure whether it works the same way in Hue, but normally I would suspect that you need to set the Hive authentication method to KERBEROS.
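For reference, a kerberized HiveServer2 typically has settings along these lines in hive-site.xml (the host and realm below are placeholders, not values from this question):
<property>
  <name>hive.server2.authentication</name>
  <value>KERBEROS</value>
</property>
<property>
  <name>hive.server2.authentication.kerberos.principal</name>
  <value>hive/_HOST@EXAMPLE.COM</value>
</property>
A matching client connection then acquires a ticket first and passes the server principal in the JDBC URL:
kinit user@EXAMPLE.COM
beeline -u "jdbc:hive2://hs2-host:10000/default;principal=hive/_HOST@EXAMPLE.COM"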

Related

GeoMesa-Accumulo add index job fails

I have a problem with GeoMesa failing on adding indexes; maybe someone knows where the problem is?
geomesa-accumulo add-attribute-index -u root -p xxx -c xxx_dev_test -a asset_id --coverage full -f telemetry_values
DEBUG Looking up Accumulo Instance Id in Zookeeper for 5000 milliseconds.
DEBUG You can specify the Instance Id via the command line or
change the Zookeeper timeout by setting the system property 'instance.zookeeper.timeout'.
INFO Running map reduce index job for attributes: [asset_id] with coverage: full...
ERROR Error encountered running attribute index command. Check hadoop's job history logs for more information.
I found that no jobs were created in Hadoop, so there are no job logs; but in the tserver logs I found:
2021-01-25 12:32:05,129 [rpc.CustomNonBlockingServer$CustomFrameBuffer] WARN : Got an IOException during write!
java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
at org.apache.thrift.transport.TNonblockingSocket.write(TNonblockingSocket.java:165)
at org.apache.thrift.server.AbstractNonblockingServer$FrameBuffer.write(AbstractNonblockingServer.java:414)
at org.apache.thrift.server.AbstractNonblockingServer$AbstractSelectThread.handleWrite(AbstractNonblockingServer.java:221)
at org.apache.thrift.server.TNonblockingServer$SelectAcceptThread.select(TNonblockingServer.java:206)
at org.apache.thrift.server.TNonblockingServer$SelectAcceptThread.run(TNonblockingServer.java:154)
hadoop 3.1
accumulo 1.9.3
geomesa-accumulo 2.4.0
Any advice?
The GeoMesa logs show what looks like the same error as the ZooKeeper one:
2021-01-25 13:29:38,762 DEBUG [org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator] IOException thrown
java.io.IOException: org.apache.thrift.transport.TTransportException: java.nio.channels.ClosedByInterruptException
at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator.doLookup(TabletServerBatchReaderIterator.java:760)
at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator$QueryTask.run(TabletServerBatchReaderIterator.java:367)
at org.apache.htrace.wrappers.TraceRunnable.run(TraceRunnable.java:57)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.thrift.transport.TTransportException: java.nio.channels.ClosedByInterruptException
at org.apache.thrift.transport.TIOStreamTransport.flush(TIOStreamTransport.java:161)
at org.apache.thrift.transport.TFramedTransport.flush(TFramedTransport.java:158)
at org.apache.accumulo.core.client.impl.ThriftTransportPool$CachedTTransport.flush(ThriftTransportPool.java:346)
at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:73)
at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:62)
at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.send_startMultiScan(TabletClientService.java:326)
at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.startMultiScan(TabletClientService.java:308)
at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator.doLookup(TabletServerBatchReaderIterator.java:684)
... 6 more
Caused by: java.nio.channels.ClosedByInterruptException
at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:475)
at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at org.apache.thrift.transport.TIOStreamTransport.flush(TIOStreamTransport.java:159)
... 13 more
Here are more logs from GeoMesa; it seems there is some problem with job creation:
2021-01-25 13:54:36,873 WARN [org.apache.hadoop.mapred.LocalJobRunner] job_local1471203421_0001
java.lang.Exception: java.lang.NullPointerException
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:492)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:552)
Caused by: java.lang.NullPointerException
at org.locationtech.geomesa.jobs.accumulo.index.AttributeIndexJob$AttributeMapper$$anonfun$setup$1.apply(AttributeIndexJob.scala:103)
at org.locationtech.geomesa.jobs.accumulo.index.AttributeIndexJob$AttributeMapper$$anonfun$setup$1.apply(AttributeIndexJob.scala:102)
at org.locationtech.geomesa.utils.io.WithStore.apply(WithStore.scala:37)
at org.locationtech.geomesa.jobs.accumulo.index.AttributeIndexJob$AttributeMapper.setup(AttributeIndexJob.scala:102)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:271)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Error from the MapReduce job:
Exception in thread "main" java.lang.NumberFormatException: For input string: "local1471203421"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:589)
at java.lang.Long.parseLong(Long.java:631)
at org.apache.hadoop.mapreduce.TypeConverter.toClusterTimeStamp(TypeConverter.java:111)
at org.apache.hadoop.mapreduce.TypeConverter.toYarn(TypeConverter.java:82)
at org.apache.hadoop.mapred.ClientServiceDelegate.<init>(ClientServiceDelegate.java:121)
at org.apache.hadoop.mapred.ClientCache.getClient(ClientCache.java:68)
at org.apache.hadoop.mapred.YARNRunner.getJobStatus(YARNRunner.java:870)
at org.apache.hadoop.mapreduce.Cluster.getJob(Cluster.java:215)
at org.apache.hadoop.mapreduce.tools.CLI.getJob(CLI.java:660)
at org.apache.hadoop.mapreduce.tools.CLI.run(CLI.java:470)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.mapred.JobClient.main(JobClient.java:1277)
Hadoop 3.1 does not support this feature; you need to update to Hadoop 3.2.
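Before upgrading, it may be worth confirming which Hadoop version the node is actually running:
hadoop version
The output should report 3.1.x on the affected node; GeoMesa generally picks up whatever Hadoop client libraries are on its classpath.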

Not able to connect Apache Drill/Dremio with NiFi 1.8.0

I am not able to connect to Apache Drill with NiFi 1.8.0, but I am able to connect with NiFi 1.7.1.
I am using the ExecuteSQL processor to connect to Apache Drill via a JDBC connection string.
With NiFi 1.8.0, a PoolableConnectionFactory exception is thrown:
Unable to execute SQL select query SELECT * from test.wellRecords due to org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: Cannot create PoolableConnectionFactory (null). No FlowFile to route to failure: org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: Cannot create PoolableConnectionFactory (null)
org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: Cannot create PoolableConnectionFactory (null)
at org.apache.nifi.dbcp.DBCPConnectionPool.getConnection(DBCPConnectionPool.java:308)
at org.apache.nifi.dbcp.DBCPService.getConnection(DBCPService.java:49)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:84)
at com.sun.proxy.$Proxy95.getConnection(Unknown Source)
at org.apache.nifi.processors.standard.AbstractExecuteSQL.onTrigger(AbstractExecuteSQL.java:195)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLException: Cannot create PoolableConnectionFactory (null)
at org.apache.commons.dbcp2.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:2385)
at org.apache.commons.dbcp2.BasicDataSource.createDataSource(BasicDataSource.java:2110)
at org.apache.commons.dbcp2.BasicDataSource.getConnection(BasicDataSource.java:1563)
at org.apache.nifi.dbcp.DBCPConnectionPool.getConnection(DBCPConnectionPool.java:305)
... 19 common frames omitted
Caused by: java.sql.SQLFeatureNotSupportedException: null
at oadd.org.apache.calcite.avatica.Helper.unsupported(Helper.java:53)
at oadd.org.apache.calcite.avatica.AvaticaConnection.isValid(AvaticaConnection.java:325)
at org.apache.drill.jdbc.impl.DrillConnectionImpl.isValid(DrillConnectionImpl.java:657)
at org.apache.commons.dbcp2.DelegatingConnection.isValid(DelegatingConnection.java:874)
at org.apache.commons.dbcp2.PoolableConnection.validate(PoolableConnection.java:270)
at org.apache.commons.dbcp2.PoolableConnectionFactory.validateConnection(PoolableConnectionFactory.java:389)
at org.apache.commons.dbcp2.BasicDataSource.validateConnectionFactory(BasicDataSource.java:2398)
at org.apache.commons.dbcp2.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:2381)
... 22 common frames omitted
I want to connect with the NiFi 1.8.0 processor because all my NiFi configuration is written in this version.
I cannot comment on the question yet, so here are some thoughts that will hopefully be of help.
Are you using the same JDBC driver as in NiFi 1.7?
NiFi dropped commons-dbcp 1.4 in favour of commons-dbcp2 2.5 in version 1.8.
Link to pom.xml
From reading the stack trace, there is a problem validating the connection to the database server. Maybe others have had similar issues? Are you setting a validation query?
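For context: when no validation query is set, commons-dbcp2 validates pooled connections by calling Connection.isValid(), and the Drill JDBC driver does not implement that method (hence the SQLFeatureNotSupportedException from AvaticaConnection.isValid in your stack trace). A possible workaround, assuming your DBCPConnectionPool controller service exposes the Validation query property, is to set it to a cheap query Drill accepts, for example:
SELECT 1 FROM (VALUES(1))
With a validation query configured, the pool validates by executing the query instead of calling isValid().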

Impala JDBC connection: Error setting/closing session: Open Session Error

If I have the following type of Impala connection, can I still use SquirreL SQL?
self.impala_con = connect(host='sql.edamame.com', port=25003, use_ssl=True, auth_mechanism="PLAIN",
user='edamame1', password='edamamePass')
Here is how I set the alias in SquirreL.
I got the following errors during the test connection:
java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.sql.SQLException: [Simba][ImpalaJDBCDriver](500151) Error setting/closing session: Open Session Error.
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:206)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.awaitConnection(OpenConnectionCommand.java:132)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.access$100(OpenConnectionCommand.java:45)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand$2.run(OpenConnectionCommand.java:115)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Update:
I also tried the URL below:
jdbc:impala://sql.edamame.com:25003/default;AuthMech=3;SSL=1
and got the following new errors:
java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.sql.SQLException: [Simba][ImpalaJDBCDriver](500310) Invalid operation: null;
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:206)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.awaitConnection(OpenConnectionCommand.java:132)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.access$100(OpenConnectionCommand.java:45)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand$2.run(OpenConnectionCommand.java:115)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
What else might I be missing? Thanks!
Short answer: RTFM.
Long answer: the Cloudera JDBC driver ships with an 80+ page PDF "Install and Config Guide". Don't be shy, open it. You can go directly to Appendix A (driver configuration) and look at these entries:
AuthMech: the authentication mechanism to use. Set the value to one of the following numbers: 0 for No Authentication, 1 for Kerberos, 2 for User Name, 3 for User Name and Password.
SSL: when this property is set to 1, the driver communicates with the Impala server through an SSL-enabled socket. When it is set to 0, the driver does not connect to SSL-enabled sockets.
So your URL should look like
jdbc:impala://blahblah:25003/default;AuthMech=3;SSL=1
One more thing: to troubleshoot SSL handshake issues, you can enable debug traces with this Java system property:
-Djavax.net.debug=ssl
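In SquirreL's case that means adding the property to the JVM that launches the client, for example by editing the java invocation in the squirrel-sql.sh launch script (script name from a standard SquirreL install; adapt to your setup):
java -Djavax.net.debug=ssl <existing options and main class from the script>
The handshake trace is printed to the console SquirreL was started from.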

Facing issue in HiveServer2 service

I am using HiveServer2 for connections, but after running a few jobs in Hive I am getting the error below in the Hive log.
2014-12-30 05:55:54,594 ERROR server.TThreadPoolServer (TThreadPoolServer.java:run(234)) - Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:209)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:182)
at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
After this I am not able to connect to HiveServer2; after restarting the HiveServer2 service I am able to connect again.
What is the root cause of this issue?
Thanks

Could not start jstatd on Ubuntu server

I want to set up two servers with jstatd running so I can monitor my applications on the fly. The web server has been up and running, but the other server always gets exceptions like this:
Could not bind /JStatRemoteHost to RMI Registry
java.rmi.ServerException: RemoteException occurred in server thread; nested exception is:
java.rmi.UnmarshalException: error unmarshalling arguments; nested exception is:
java.lang.ClassNotFoundException: sun.jvmstat.monitor.remote.RemoteHost (no security manager: RMI class loader disabled)
at sun.rmi.server.UnicastServerRef.oldDispatch(UnicastServerRef.java:419)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:267)
at sun.rmi.transport.Transport$1.run(Transport.java:177)
at sun.rmi.transport.Transport$1.run(Transport.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
at sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:273)
at sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:251)
at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:377)
at sun.rmi.registry.RegistryImpl_Stub.rebind(Unknown Source)
at java.rmi.Naming.rebind(Naming.java:177)
at sun.tools.jstatd.Jstatd.bind(Jstatd.java:57)
at sun.tools.jstatd.Jstatd.main(Jstatd.java:143)
Caused by: java.rmi.UnmarshalException: error unmarshalling arguments; nested exception is:
java.lang.ClassNotFoundException: sun.jvmstat.monitor.remote.RemoteHost (no security manager: RMI class loader disabled)
at sun.rmi.registry.RegistryImpl_Skel.dispatch(Unknown Source)
at sun.rmi.server.UnicastServerRef.oldDispatch(UnicastServerRef.java:409)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:267)
at sun.rmi.transport.Transport$1.run(Transport.java:177)
at sun.rmi.transport.Transport$1.run(Transport.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.ClassNotFoundException: sun.jvmstat.monitor.remote.RemoteHost (no security manager: RMI class loader disabled)
at sun.rmi.server.LoaderHandler.loadProxyClass(LoaderHandler.java:554)
at java.rmi.server.RMIClassLoader$2.loadProxyClass(RMIClassLoader.java:646)
at java.rmi.server.RMIClassLoader.loadProxyClass(RMIClassLoader.java:311)
at sun.rmi.server.MarshalInputStream.resolveProxyClass(MarshalInputStream.java:263)
at java.io.ObjectInputStream.readProxyDesc(ObjectInputStream.java:1556)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1512)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1769)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1348)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
... 13 more
I am sure jstatd loaded the policy file, whose content is:
grant codebase "file:${java.home}/../lib/tools.jar" {
    permission java.security.AllPermission;
};
I cannot figure out where the problem comes from. Please help.
I had the same problem as you.
As far as I understood, the exception occurred on the RMI registry side because it couldn't find the class sun.jvmstat.monitor.remote.RemoteHost, which is located inside tools.jar.
In my case the solution was to specify the java.rmi.server.codebase property when starting rmiregistry. After specifying the codebase property, the problem was gone.
Please try to start the rmiregistry using the following command for Linux/Solaris:
rmiregistry
-J-Djava.rmi.server.codebase=file:${java.home}/../lib/tools.jar &
or for Windows (though I didn't test it fully)
start rmiregistry
-J-Djava.rmi.server.codebase="%JAVA_HOME%/../lib/tools.jar"
Hope it will help you.
