Cloudera Community Post
Using Hue in Cloudera 5.4.4, when I try to run a Sqoop 2 job it shows the (i) "The job is starting..." notification, but it never actually runs the job. I see nothing in the Job Browser and nothing in the job's SUBMISSIONS list. I also do not see any errors in the logs.
This is on a stock Cloudera 5.4.4 (QuickStart VM), which I'm assuming has all of the components pre-configured correctly. Unfortunately, the lack of error messages and helpful reporting is all I have to go on.
On CDH 5, the Sqoop 2 server does make some logs available under /var/log/sqoop2/sqoop.log. In my case, the error was that I had not provided a partition column. This is an optional parameter, but the error reporting in the Hue interface doesn't surface any of these details (a quick way to check the server log is sketched after the trace):
2015-07-27 07:52:09,728 ERROR server.SqoopProtocolServlet [org.apache.sqoop.server.SqoopProtocolServlet.doPut(SqoopProtocolServlet.java:84)] Exception in PUT http://quickstart.cloudera:12000/sqoop/v1/job/1/start
org.apache.sqoop.common.SqoopException: GENERIC_JDBC_CONNECTOR_0005:No column is found to partition data
at org.apache.sqoop.connector.jdbc.GenericJdbcFromInitializer.configurePartitionProperties(GenericJdbcFromInitializer.java:147)
at org.apache.sqoop.connector.jdbc.GenericJdbcFromInitializer.initialize(GenericJdbcFromInitializer.java:51)
at org.apache.sqoop.connector.jdbc.GenericJdbcFromInitializer.initialize(GenericJdbcFromInitializer.java:40)
at org.apache.sqoop.driver.JobManager.initializeConnector(JobManager.java:449)
at org.apache.sqoop.driver.JobManager.createJobRequest(JobManager.java:372)
at org.apache.sqoop.driver.JobManager.start(JobManager.java:276)
at org.apache.sqoop.handler.JobRequestHandler.startJob(JobRequestHandler.java:379)
at org.apache.sqoop.handler.JobRequestHandler.handleEvent(JobRequestHandler.java:115)
at org.apache.sqoop.server.v1.JobServlet.handlePutRequest(JobServlet.java:96)
at org.apache.sqoop.server.SqoopProtocolServlet.doPut(SqoopProtocolServlet.java:79)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:646)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:723)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:592)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.doFilter(DelegationTokenAuthenticationFilter.java:277)
at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:555)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:861)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:620)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
at java.lang.Thread.run(Thread.java:745)
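To make the failure visible next time, you can watch the server log while re-submitting, and then fix the job definition. A minimal sketch assuming stock CDH paths; the sqoop2 shell commands follow the Sqoop 1.99.x client docs as I recall them, so exact flags may differ by version:
# Watch the Sqoop 2 server log while re-submitting the job from Hue;
# the real error (here, the missing partition column) shows up here.
tail -f /var/log/sqoop2/sqoop.log

# Then update the job so the "From" side names a partition column.
# In the interactive client (CDH ships it as sqoop2-shell):
#   sqoop:000> set server --host quickstart.cloudera --port 12000 --webapp sqoop
#   sqoop:000> update job --jid 1
sqoop2-shell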
Related
I am using Sqoop to import data from Oracle to HDFS. When the job starts, it gets stuck at 5% progress for about 1 hour and outputs this info:
INFO mapreduce.Job: Task Id : attempt_1535519556038_0015_m_000037_0, Status : FAILED
Container launch failed for container_1535519556038_0015_01_000043 : org.apache.hadoop.yarn.exceptions.YarnException: Unauthorized request to start container.
This token is expired. current time is 1536133107764 found 1536133094775
Note: System times on machines may be out of sync. Check system time and time zones.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:168)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:155)
at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:375)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
and then it continues until the job terminates successfully and all the data is imported. So my questions are: What causes the job to hang at 5% progress? Why does it self-correct? Is it normal? If not, is it related to the info above? How can I fix it?
The error message explains it clearly: "Unauthorized request to start container. This token is expired".
One option would be to increase the lifespan of the container by setting yarn.resourcemanager.rm.container-allocation.expiry-interval-ms, which defaults to 10 minutes (600000 ms).
Note: The jobs will work if you increase yarn.resourcemanager.rm.container-allocation.expiry-interval-ms in the yarn-site.xml config file:
<property>
  <name>yarn.resourcemanager.rm.container-allocation.expiry-interval-ms</name>
  <value>1000000</value>
</property>
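Since the log itself warns that "System times on machines may be out of sync", it is also worth ruling out clock skew before (or instead of) raising the expiry interval. A minimal sketch, assuming passwordless SSH and hypothetical host names:
# Print epoch milliseconds on each node (GNU date); differences beyond a few
# hundred ms point to an NTP problem rather than a too-short expiry interval.
for host in master worker1 worker2; do
  printf '%s: ' "$host"
  ssh "$host" date +%s%3N
done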
I have an AWS Elasticsearch t2.medium instance with 2 nodes running and hardly any load on it. Still, it is crashing all the time.
I see the following graph for the metric JVMMemoryPressure:
When I go to Kibana, I see the following error message:
Questions:
1. Do I interpret this correctly that the machines only have 64 MB of memory available, instead of the 4 GB that should come with this instance type? Is there another place to verify the absolute amount of heap memory, instead of in Kibana, and only when things are already going wrong?
2. If so, how can I change this behavior?
3. If this is normal, where can I look for possible causes of Elasticsearch crashing whenever the memory footprint reaches 100%? I have only a very small load on the instance.
In the instance logs, I see a lot of warnings, e.g. the ones below. They don't provide any clue as to where to start debugging the issue.
[2018-08-15T07:36:37,021][WARN ][r.suppressed ] path: __PATH__ params:
{}
org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [__PATH__ master];
at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(ClusterBlocks.java:165) ~[elasticsearch-6.0.1.jar:6.0.1]
at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation.handleBlockExceptions(TransportBulkAction.java:387) [elasticsearch-6.0.1.jar:6.0.1]
at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation.doRun(TransportBulkAction.java:273) [elasticsearch-6.0.1.jar:6.0.1]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.0.1.jar:6.0.1]
at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$2.onTimeout(TransportBulkAction.java:421) [elasticsearch-6.0.1.jar:6.0.1]
at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:317) [elasticsearch-6.0.1.jar:6.0.1]
at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:244) [elasticsearch-6.0.1.jar:6.0.1]
at org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:578) [elasticsearch-6.0.1.jar:6.0.1]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-6.0.1.jar:6.0.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_172]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_172]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_172]
or
[2018-08-15T07:36:37,691][WARN ][o.e.d.z.ZenDiscovery ] [U1DMgyE] not enough master nodes discovered during pinging (found [[Candidate{node={U1DMgyE}{U1DMgyE1Rn2gId2aRgRDtw}{F-tqTFGDRZaovQF8ILC44w}{__IP__}{__IP__}{__AMAZON_INTERNAL__, __AMAZON_INTERNAL__}, clusterStateVersion=207939}]], but needed [2]), pinging again
or
[2018-08-15T07:36:42,303][WARN ][o.e.t.n.Netty4Transport ] [U1DMgyE] write and flush on the network layer failed (channel: [id: 0x385d3b63, __PATH__ ! __PATH__])
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.writev0(Native Method) ~[?:1.8.0_172]
at sun.nio.ch.SocketDispatcher.writev(SocketDispatcher.java:51) ~[?:1.8.0_172]
at sun.nio.ch.IOUtil.write(IOUtil.java:148) ~[?:1.8.0_172]
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:504) ~[?:1.8.0_172]
at io.netty.channel.socket.nio.NioSocketChannel.doWrite(NioSocketChannel.java:432) ~[netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:856) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.forceFlush(AbstractNioChannel.java:368) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:638) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.13.Final.jar:4.1.13.Final]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_172]
I have since learned that that number is incorrect, though I don't know where it comes from. To get the correct memory usage, run the following query:
GET <es_url>:9200/_nodes/stats
If you're looking for memory usage only, use GET <es_url>:9200/_cat/nodes?h=heap* instead; it gives a more readable response, like the one below.
{
  "payload": [
    {
      "heap.current": "4.1gb",
      "heap.max": "15.9gb",
      "heap.percent": "25"
    },
    {
      "heap.current": "3.9gb",
      "heap.max": "15.9gb",
      "heap.percent": "24"
    },
    ...
  ]
}
_nodes/stats is more elaborate, though, and includes all the other details as well.
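For completeness, a minimal sketch of both queries from the shell; the endpoint placeholder mirrors the answer above, and the filter_path narrowing is my addition:
# Heap usage per node, one line each (&v adds a header row):
curl -s "https://<es_url>:9200/_cat/nodes?h=name,heap.current,heap.max,heap.percent&v"

# Full node stats are verbose; filter_path trims the JSON to JVM memory only:
curl -s "https://<es_url>:9200/_nodes/stats/jvm?filter_path=nodes.*.jvm.mem"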
In my Hadoop 2.6.5 HA cluster with Oozie (oozie-4.1.0-cdh5.12.1), when I run the Oozie example:
[oozie@master shell]$ cat job.properties
nameNode=hdfs://cluster1:8020
jobTracker=master:8032
queueName=default
examplesRoot=examples
oozie.wf.application.path=${nameNode}/user/oozie/${examplesRoot}/apps/shell
[hadoop@master sbin]$
[hadoop@master sbin]$ oozie job -oozie http://master.bigdata.com:11000/oozie -config /home/hadoop/app/oozie/examples/apps/map-reduce/job.properties -run
Error: HTTP error code: 500 : Internal Server Error
[hadoop@master sbin]$
[hadoop@master shell]$ oozie job -oozie http://master.bigdata.com:11000/oozie -config /home/hadoop/app/oozie/examples/apps/shell/job.properties -run
Error: HTTP error code: 500 : Internal Server Error
[hadoop@master shell]$
The error is:
[oozie@master logs]$ pwd
/home/hadoop/app/oozie/logs
[oozie@master logs]$ vi oozie.log
2017-09-06 00:33:19,850 WARN AuthenticationFilter:532 - SERVER[master.bigdata.com] AuthenticationToken ignored: org.apache.hadoop.security.authentication.util.SignerException: Invalid signature
2017-09-06 00:33:19,924 ERROR SubmitXCommand:517 - SERVER[master.bigdata.com] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[-] ACTION[-] Error,
java.lang.NoSuchMethodError: org.apache.hadoop.fs.FsServerDefaults.<init>(JIISIZJLorg/apache/hadoop/util/DataChecksum$Type;)V
at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:1327)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getServerDefaults(ClientNamenodeProtocolTranslatorPB.java:267)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:260)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
at com.sun.proxy.$Proxy28.getServerDefaults(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getServerDefaults(DFSClient.java:996)
at org.apache.hadoop.hdfs.DFSClient.shouldEncryptData(DFSClient.java:2032)
at org.apache.hadoop.hdfs.DFSClient.newDataEncryptionKey(DFSClient.java:2038)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:208)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.peerSend(SaslDataTransferClient.java:159)
at org.apache.hadoop.hdfs.net.TcpPeerServer.peerFromSocketAndKey(TcpPeerServer.java:90)
at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3093)
at org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:778)
at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:693)
at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:354)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:617)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:841)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:889)
at java.io.DataInputStream.read(DataInputStream.java:149)
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177)
at java.io.InputStreamReader.read(InputStreamReader.java:184)
at java.io.Reader.read(Reader.java:140)
at org.apache.oozie.util.IOUtils.copyCharStream(IOUtils.java:171)
at org.apache.oozie.service.WorkflowAppService.readDefinition(WorkflowAppService.java:135)
at org.apache.oozie.service.LiteWorkflowAppService.parseDef(LiteWorkflowAppService.java:46)
at org.apache.oozie.command.wf.SubmitXCommand.execute(SubmitXCommand.java:165)
at org.apache.oozie.command.wf.SubmitXCommand.execute(SubmitXCommand.java:76)
at org.apache.oozie.command.XCommand.call(XCommand.java:286)
at org.apache.oozie.DagEngine.submitJob(DagEngine.java:114)
at org.apache.oozie.servlet.V1JobsServlet.submitWorkflowJob(V1JobsServlet.java:192)
at org.apache.oozie.servlet.V1JobsServlet.submitJob(V1JobsServlet.java:92)
at org.apache.oozie.servlet.BaseJobsServlet.doPost(BaseJobsServlet.java:102)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:643)
at org.apache.oozie.servlet.JsonRestServlet.service(JsonRestServlet.java:289)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:723)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.oozie.servlet.AuthFilter$2.doFilter(AuthFilter.java:171)
at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:631)
at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:579)
at org.apache.oozie.servlet.AuthFilter.doFilter(AuthFilter.java:176)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.oozie.servlet.HostnameFilter.doFilter(HostnameFilter.java:86)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:610)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:503)
at java.lang.Thread.run(Thread.java:745)
2017-09-06 00:33:58,499 INFO StatusTransitService$StatusTransitRunnable:520 - SERVER[master.bigdata.com] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[-] ACTION[-] Acquired lock for [org.apache.oozie.service.StatusTransitService]
Regarding "In my Hadoop 2.6.5 HA cluster with Oozie (oozie-4.1.0-cdh5.12.1)":
oozie-4.1.0+cdh5.12.1 is primarily targeted to work with hadoop-2.6.0+cdh5.12.1.
Trying to mix versions, or compiling any later versions yourself, is only asking for trouble.
Specifically, you have a CLASSPATH issue:
java.lang.NoSuchMethodError: org.apache.hadoop.fs.FsServerDefaults.<init>
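One hedged way to confirm the mismatch is to compare the hadoop-common jar bundled on the Oozie server against the version the cluster runs; the paths below follow the layout shown in the question:
# Which hadoop-common does Oozie bundle? A NoSuchMethodError on
# FsServerDefaults.<init> means this jar disagrees with the cluster's HDFS client.
find /home/hadoop/app/oozie -name 'hadoop-common-*.jar'

# Which version is the cluster actually running?
hadoop version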
If you insist on using Cloudera packaging, you can find the necessary downloads here:
https://www.cloudera.com/documentation/enterprise/release-notes/topics/cm_vd_cdh_package_tarball_512.html#cm_vd_cdh_package_tarball_512
My recommendation would be to install Cloudera Manager and let it install and configure the CDH components for you.
This is an issue that is simply driving me nuts. I have a one-machine Storm instance running on my local LAN, currently on the v0.9.1-incubating release (from the Apache Incubator site). The issue is simply that my Storm supervisor process refuses to start after EVERY SINGLE reboot. The hack fix is quite simple: remove the supervisor and workers folders from the Storm local directory and re-run the process; things then run fine until the next reboot.
I'm providing every bit of information I think might be relevant to debugging this issue. Please ask for more if needed, but please help me get some resolution.
PS: It doesn't matter if I have topologies running or not.
Zookeeper version: 3.4.5
Storm version: 0.9.1-incubating (uses Netty transport)
Both Storm and Zookeeper run on the same machine.
supervisord version: 3.0b2
OS: Ubuntu 12.04 LTS
Processor: AMD Phenom(tm) II X6 1055T Processor × 6
RAM: 5.6 GiB
Supervisor config
[program:zookeeper]
command=/path/to/zookeeper/bin/zkServer.sh "start-foreground"
process_name=zookeeper
directory=/path/to/zookeeper/bin
stdout_logfile=/var/log/zookeeper.log ; stdout log path, NONE$
stderr_logfile=/var/log/err.zookeeper.log ; stderr log path, $
priority=2
user=root
[program:storm-nimbus]
command=/path/to/storm/bin/storm nimbus
user=root
autostart=true
autorestart=true
startsecs=10
startretries=2
log_stdout=true
log_stderr=true
stderr_logfile=/var/log/storm/nimbus.err.log
stdout_logfile=/var/log/storm/nimbus.out.log
logfile_maxbytes=20MB
logfile_backups=2
priority=10
[program:storm-ui]
command=/path/to/storm/bin/storm ui
user=root
autostart=true
autorestart=true
startsecs=10
startretries=2
log_stdout=true
log_stderr=true
stderr_logfile=/var/log/storm/ui.err.log
stdout_logfile=/var/log/storm/ui.out.log
logfile_maxbytes=20MB
logfile_backups=2
priority=500
[program:storm-supervisor]
command=/path/to/storm/bin/storm supervisor
user=root
autostart=true
autorestart=true
startsecs=10
startretries=2
log_stdout=true
log_stderr=true
stderr_logfile=/var/log/storm/supervisor.err.log
stdout_logfile=/var/log/storm/supervisor.log.log
logfile_maxbytes=20MB
logfile_backups=2
priority=600
[program:storm-logviewer]
command=/path/to/storm/bin/storm logviewer
user=root
autostart=true
autorestart=true
startsecs=10
startretries=2
log_stdout=true
log_stderr=true
stderr_logfile=/var/log/storm/log.err.log
stdout_logfile=/var/log/storm/log.out.log
logfile_maxbytes=20MB
logfile_backups=2
priority=900
Storm config
#Zookeeper
storm.zookeeper.servers:
- "192.168.1.11"
# Nimbus
nimbus.host: "192.168.1.11"
nimbus.childopts: '-Xmx1024m -Djava.net.preferIPv4Stack=true -Dprocess=storm'
# UI
ui.port: 9090
ui.childopts: "-Xmx768m -Djava.net.preferIPv4Stack=true -Dprocess=storm"
# Supervisor
supervisor.childopts: '-Djava.net.preferIPv4Stack=true -Dprocess=storm'
# Worker
worker.childopts: '-Xmx768m -Djava.net.preferIPv4Stack=true -Dprocess=storm'
storm.local.dir: "/path/to/storm"
storm.messaging.transport: "backtype.storm.messaging.netty.Context"
storm.messaging.netty.server_worker_threads: 1
storm.messaging.netty.client_worker_threads: 1
storm.messaging.netty.buffer_size: 5242880
storm.messaging.netty.max_retries: 100
storm.messaging.netty.max_wait_ms: 1000
storm.messaging.netty.min_wait_ms: 100
Error message
There is a Pastebin with the full error log; I'm cross-posting the relevant bits here.
java.lang.RuntimeException: java.io.EOFException
at backtype.storm.utils.Utils.deserialize(Utils.java:86) ~[storm-core-0.9.1-incubating.jar:0.9.1-incubating]
at backtype.storm.utils.LocalState.snapshot(LocalState.java:45) ~[storm-core-0.9.1-incubating.jar:0.9.1-incubating]
at backtype.storm.utils.LocalState.get(LocalState.java:56) ~[storm-core-0.9.1-incubating.jar:0.9.1-incubating]
at backtype.storm.daemon.supervisor$sync_processes.invoke(supervisor.clj:207) ~[storm-core-0.9.1-incubating.jar:0.9.1-incubating]
at clojure.lang.AFn.applyToHelper(AFn.java:161) [clojure-1.4.0.jar:na]
at clojure.lang.AFn.applyTo(AFn.java:151) [clojure-1.4.0.jar:na]
at clojure.core$apply.invoke(core.clj:603) ~[clojure-1.4.0.jar:na]
at clojure.core$partial$fn__4070.doInvoke(core.clj:2343) ~[clojure-1.4.0.jar:na]
at clojure.lang.RestFn.invoke(RestFn.java:397) ~[clojure-1.4.0.jar:na]
at backtype.storm.event$event_manager$fn__2593.invoke(event.clj:39) ~[na:na]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
at java.lang.Thread.run(Thread.java:679) [na:1.6.0_27]
Caused by: java.io.EOFException: null
at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2322) ~[na:1.6.0_27]
at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:2791) ~[na:1.6.0_27]
at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:798) ~[na:1.6.0_27]
at java.io.ObjectInputStream.<init>(ObjectInputStream.java:298) ~[na:1.6.0_27]
at backtype.storm.utils.Utils.deserialize(Utils.java:81) ~[storm-core-0.9.1-incubating.jar:0.9.1-incubating]
... 11 common frames omitted
2014-03-11 12:27:25 b.s.util [INFO] Halting process: ("Error when processing an event")
We had that exact same problem (supervisor crashing on start and same log error message) when we had a power outage on 2 of our development servers. I guess just stopping the server without previously stopping the supervisor would have the same effect.
The only working solution we found was to remove the "storm-local/supervisor" folder (I guess something in there got corrupted).
I faced a similar issue too. I always had to remove the local folder and restart the topology.
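If the corrupted state keeps coming back after unclean shutdowns, here is a minimal workaround sketch based on the fix above. The path follows storm.local.dir from the question, and note that this discards local supervisor and worker state:
# The EOFException in LocalState.snapshot suggests the serialized state file
# was truncated by the unclean shutdown; removing it forces a clean rebuild.
rm -rf /path/to/storm/supervisor /path/to/storm/workers
Running Storm under supervisord, you could put this in a wrapper script that runs before "storm supervisor" starts, though shutting Storm down cleanly before reboots is the better long-term fix.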
I have integrated a simple service call in Kony. When I run my app on the iOS simulator I cannot see any errors, but in the middleware.log files I can see the error below. Can anyone please help?
Logs
X-Forwarded-For=null] 12:15:55,270 DEBUG factory.KonyAppFactory - Memcache is Enabled and Memcache Session Management instance is created
[appID=ProjectBestBuy01
requestID=61c164cc-1743-437f-8288-8e6ec0903d86
UA=BestBuy/1.0 CFNetwork/711.1.12 Darwin/15.0.0
rcid=NA
referer=NA
node.no=1
REMOTEADDRESS=127.0.0.1
REMOTEADDRESS=127.0.0.1
X-Forwarded-For=null] 11:39:51,112 ERROR cache.MemCacheWrapper - Unable to store object in Memcache node /127.0.0.1:21201 for Key: 10e6351c7-fe0d-44e4-bbc5-3e448d0af7a6 in 3 attempt from Server Node Num: 1
java.util.concurrent.TimeoutException: Timed out waiting for operation
at net.spy.memcached.MemcachedClient$OperationFuture.get(MemcachedClient.java:1656)
at com.konylabs.middleware.cache.MemCacheWrapper.storebyNoofattempts(MemCacheWrapper.java:277)
at com.konylabs.middleware.cache.MemCacheWrapper.store(MemCacheWrapper.java:235)
at com.konylabs.middleware.common.AbstractCacheSessionManager.store2Cache(AbstractCacheSessionManager.java:280)
at com.konylabs.middleware.common.AbstractCacheSessionManager.exit(AbstractCacheSessionManager.java:212)
at com.konylabs.middleware.common.MemCacheDCFilterAction.doChainDCFilter(MemCacheDCFilterAction.java:115)
at com.konylabs.middleware.common.MiddlewareMemCacheDCFilter.doFilter(MiddlewareMemCacheDCFilter.java:42)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at com.konylabs.middleware.common.XSSFilter.doFilter(XSSFilter.java:169)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at com.konylabs.middleware.common.AddAdditionalResponseHeaderAttribute.doFilter(AddAdditionalResponseHeaderAttribute.java:94)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:501)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:170)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1040)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:607)
at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:313)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
[appID=ProjectBestBuy01
requestID=3dc52d1e-9b5a-45d5-8b64-92f8cf1f0cb7
UA=BestBuy/1.0 CFNetwork/711.1.12 Darwin/15.0.0
rcid=NA
Your question doesn't have enough information.
1. Are you trying to save any information in the middleware Memcache?
2. If yes, how are you trying to save it: via a preprocessor, a postprocessor, or a Java service?
3. Have you set the scope to session in the service definition file?
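Independent of those questions, the log shows the middleware timing out while storing sessions to the Memcache node at 127.0.0.1:21201, so it is worth checking that memcached is actually reachable on that port (address and port taken from the log):
# memcached answers "stats" over its plain-text protocol; if this hangs or
# the connection is refused, the store timeouts in the log are expected.
echo stats | nc 127.0.0.1 21201 | head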