Hi, I am trying to upload multiple files to an Artifactory repository from my Jenkinsfile. Please find the snippet and the error pasted below.
Artifactory plugin version 2.16.2, Jenkins 2.138.2.
Snippet from the Jenkinsfile and the message from the logs:
def uploadSpec = """{
    "files": [
        {
            "pattern": "folder/test.py",
            "target": "reponame-local/folder/"
        }
    ]
}"""
def buildInfo1 = server.upload spec: uploadSpec
When the upload runs I get the error below ("Invalid object ID"); I have pasted the full logs here:
SEVERE: Failed to execute command Pipe.Flush(-1) (channel JNLP4-connect connection from 10.37.88.189/10.37.88.189:41732)
java.util.concurrent.ExecutionException: Invalid object ID -1 iota=25377
at hudson.remoting.ExportTable.diagnoseInvalidObjectId(ExportTable.java:478)
at hudson.remoting.ExportTable.get(ExportTable.java:397)
at hudson.remoting.Channel.getExportedObject(Channel.java:780)
at hudson.remoting.ProxyOutputStream$Flush.execute(ProxyOutputStream.java:307)
at hudson.remoting.Channel$1.handle(Channel.java:565)
at hudson.remoting.AbstractByteBufferCommandTransport.processCommand(AbstractByteBufferCommandTransport.java:203)
at hudson.remoting.AbstractByteBufferCommandTransport.receive(AbstractByteBufferCommandTransport.java:189)
at org.jenkinsci.remoting.protocol.impl.ChannelApplicationLayer.onRead(ChannelApplicationLayer.java:187)
at org.jenkinsci.remoting.protocol.ApplicationLayer.onRecv(ApplicationLayer.java:207)
at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecv(ProtocolStack.java:669)
at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.processRead(SSLEngineFilterLayer.java:369)
at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.onRecv(SSLEngineFilterLayer.java:117)
at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecv(ProtocolStack.java:669)
at org.jenkinsci.remoting.protocol.NetworkLayer.onRead(NetworkLayer.java:136)
at org.jenkinsci.remoting.protocol.impl.NIONetworkLayer.ready(NIONetworkLayer.java:160)
at org.jenkinsci.remoting.protocol.IOHub$OnReady.run(IOHub.java:795)
at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:59)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.Exception: Object appears to be deallocated at lease before Sun Nov 18 20:34:02 MST 2018
at hudson.remoting.ExportTable.diagnoseInvalidObjectId(ExportTable.java:474)
java.lang.Exception: Error occurred during operation, please refer to logs for more information.
at org.jfrog.build.extractor.producerConsumer.ProducerConsumerExecutor.start(ProducerConsumerExecutor.java:84)
at org.jfrog.build.extractor.clientConfiguration.util.spec.SpecsHelper.uploadArtifactsBySpec(SpecsHelper.java:71)
at org.jfrog.hudson.generic.GenericArtifactsDeployer$FilesDeployerCallable.invoke(GenericArtifactsDeployer.java:190)
Also: hudson.remoting.Channel$CallSiteStackTrace: Remote call to JNLP4-connect connection from xx.xx.xxx.xx/xx.xx.xxx.xx:41732
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1741)
at hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)
at hudson.remoting.Channel.call(Channel.java:955)
at hudson.FilePath.act(FilePath.java:1071)
at hudson.FilePath.act(FilePath.java:1060)
at org.jfrog.hudson.pipeline.executors.GenericUploadExecutor.execution(GenericUploadExecutor.java:52)
at org.jfrog.hudson.pipeline.steps.UploadStep$Execution.run(UploadStep.java:65)
at org.jfrog.hudson.pipeline.steps.UploadStep$Execution.run(UploadStep.java:46)
at org.jenkinsci.plugins.workflow.steps.AbstractSynchronousNonBlockingStepExecution$1$1.call(AbstractSynchronousNonBlockingStepExecution.java:47)
at hudson.security.ACL.impersonate(ACL.java:290)
at org.jenkinsci.plugins.workflow.steps.AbstractSynchronousNonBlockingStepExecution$1.run(AbstractSynchronousNonBlockingStepExecution.java:44)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
Caused: java.lang.RuntimeException: Failed uploading artifacts by spec
at org.jfrog.hudson.generic.GenericArtifactsDeployer$FilesDeployerCallable.invoke(GenericArtifactsDeployer.java:194)
at org.jfrog.hudson.generic.GenericArtifactsDeployer$FilesDeployerCallable.invoke(GenericArtifactsDeployer.java:131)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3085)
at hudson.remoting.UserRequest.perform(UserRequest.java:212)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:369)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at hudson.remoting.Engine$1.lambda$newThread$0(Engine.java:93)
at java.lang.Thread.run(Thread.java:748)
This looks like an issue created deliberately on the infra side. Please try with a user that has write access; beyond that, it can only be resolved by the Infra team.
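Separately, on the original goal of uploading multiple files: the file spec simply takes several pattern/target entries (wildcards in "pattern" work too). A minimal sketch with placeholder paths, reusing the repository name from above, once the access problem is sorted out:

def uploadSpec = """{
    "files": [
        {
            "pattern": "folder/*.py",
            "target": "reponame-local/folder/"
        },
        {
            "pattern": "scripts/deploy.sh",
            "target": "reponame-local/scripts/"
        }
    ]
}"""
def buildInfo1 = server.upload spec: uploadSpec
// optional: publish the collected build info so the upload shows up under Builds in Artifactory
server.publishBuildInfo buildInfo1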
I want to drop the duplicate views, but I get this error. The tables are in Parquet format.
Query: drop view esquema.tabla
This is the error:
SQL Error [1] [08S01]: org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:java.lang.IllegalArgumentException: Can not create a Path from an empty string)
at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:380)
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:257)
at org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91)
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:348)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1889)
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:362)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:java.lang.IllegalArgumentException: Can not create a Path from an empty string)
at org.apache.hadoop.hive.ql.metadata.Hive.dropTable(Hive.java:1200)
at org.apache.hadoop.hive.ql.metadata.Hive.dropTable(Hive.java:1131)
at org.apache.hadoop.hive.ql.exec.DDLTask.dropTable(DDLTask.java:4176)
at org.apache.hadoop.hive.ql.exec.DDLTask.dropTableOrPartitions(DDLTask.java:4038)
at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:379)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2183)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1839)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1526)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1232)
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:255)
... 11 more
Caused by: MetaException(message:java.lang.IllegalArgumentException: Can not create a Path from an empty string)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newMetaException(HiveMetaStore.java:6143)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_with_environment_context(HiveMetaStore.java:1932)
at sun.reflect.GeneratedMethodAccessor409.invoke(Unknown Source)
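Not a confirmed fix, but since the MetaException is about creating a Path from an empty string, a reasonable first step is to inspect the view's metadata and see whether its location or storage entries are empty or point at something that no longer exists, for example:

DESCRIBE FORMATTED esquema.tabla;
SHOW CREATE TABLE esquema.tabla;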
Hi, I am receiving this error when trying to connect to an Oracle DB. I have already tried reconnecting to the DB again and again, and also closing and reopening the application.
Here is the stack trace of the error I am getting:
java.lang.AbstractMethodError: Method oracle/jdbc/driver/T4CCallableStatement.isClosed()Z is abstract
at oracle.jdbc.driver.T4CCallableStatement.isClosed(T4CCallableStatement.java)
at oracle.dbtools.raptor.proxy.driver.oracle.RaptorProxyOJDBCStatement$StatementCloser.close(RaptorProxyOJDBCStatement.java:61)
at oracle.dbtools.raptor.proxy.ProxyContext.referentDequeued(ProxyContext.java:248)
at oracle.dbtools.util.GCTracker.handlePendingReap(GCTracker.java:109)
at oracle.dbtools.util.GCTracker.track(GCTracker.java:60)
at oracle.dbtools.raptor.proxy.AbstractProxy.initialize(AbstractProxy.java:49)
at oracle.dbtools.raptor.proxy.AbstractProxy.initialize(AbstractProxy.java:59)
at oracle.dbtools.raptor.proxy.AbstractProxy.getProxyContext(AbstractProxy.java:131)
at oracle.dbtools.raptor.proxy.ProxyUtils.formatArg(ProxyUtils.java:72)
at oracle.dbtools.raptor.proxy.ProxyLogger.logPostForAll(ProxyLogger.java:221)
at oracle.dbtools.raptor.proxy.ProxyLogger.stateChanged(ProxyLogger.java:48)
at oracle.dbtools.raptor.proxy.MethodTracker.fireStateChanged(MethodTracker.java:103)
at oracle.dbtools.raptor.proxy.MethodTracker.access$900(MethodTracker.java:18)
at oracle.dbtools.raptor.proxy.MethodTracker$CallMetrics.setState(MethodTracker.java:280)
at oracle.dbtools.raptor.proxy.MethodTracker$CallMetrics.access$400(MethodTracker.java:236)
at oracle.dbtools.raptor.proxy.MethodTracker.methodReturned(MethodTracker.java:178)
at oracle.dbtools.raptor.proxy.AbstractProxy.methodCompleted(AbstractProxy.java:122)
at oracle.dbtools.raptor.proxy.AbstractProxy.methodReturned(AbstractProxy.java:97)
at oracle.dbtools.raptor.proxy.driver.oracle.RaptorProxyOJDBCConnection.postFor_prepareCall(RaptorProxyOJDBCConnection.java:294)
at oracle.jdbc.proxy.oracle$1dbtools$1raptor$1proxy$1driver$1oracle$1RaptorProxyOJDBCConnection$2oracle$1jdbc$1internal$1OracleConnection$$$Proxy.prepareCall(Unknown Source)
at oracle.dbtools.db.OracleUtil.checkAccessImpl(OracleUtil.java:446)
at oracle.dbtools.db.DBUtil.checkAccess(DBUtil.java:1815)
at oracle.dbtools.db.AccessCache.checkAccess(AccessCache.java:64)
at oracle.dbtools.db.AccessCache.hasAccess(AccessCache.java:55)
at oracle.dbtools.db.AccessCache.hasAccess(AccessCache.java:24)
at oracle.dbtools.raptor.query.QueryUtils.checkNonOracleAccess(QueryUtils.java:610)
at oracle.dbtools.raptor.query.QueryUtils.getQuery(QueryUtils.java:410)
at oracle.dbtools.raptor.query.QueryUtils.getQuery(QueryUtils.java:329)
at oracle.dbtools.raptor.query.ObjectQueries.getQuery(ObjectQueries.java:46)
at oracle.dbtools.raptor.navigator.db.xml.XmlObjectFactory.createFolderInstance(XmlObjectFactory.java:63)
at oracle.dbtools.raptor.navigator.db.xml.XmlTypeOwnerInstance.listTypeFolders(XmlTypeOwnerInstance.java:93)
at oracle.dbtools.raptor.navigator.db.impl.TypeContainerTreeNode.loadTypeFolders(TypeContainerTreeNode.java:121)
at oracle.dbtools.raptor.navigator.db.impl.DatabaseTreeNode$LoadTask.doWork(DatabaseTreeNode.java:190)
at oracle.dbtools.raptor.navigator.db.impl.DatabaseTreeNode$LoadTask.doWork(DatabaseTreeNode.java:119)
at oracle.dbtools.raptor.backgroundTask.RaptorTask.call(RaptorTask.java:199)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at oracle.dbtools.raptor.backgroundTask.RaptorTaskManager$RaptorFutureTask.run(RaptorTaskManager.java:702)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
=== UPDATE === (see original post below)
Although the bug fix initiated by zyro brought some improvements, the error still has not disappeared completely. It still looks like this:
31-Jan-2018 19:30:53.529 INFO [MessageBroker-3] org.apache.coyote.AbstractProcessor.setErrorState An error occurred in processing while on a non-container thread. The connection will be closed immediately
java.io.IOException: APR error: -32
at org.apache.coyote.http11.InternalAprOutputBuffer.writeToSocket(InternalAprOutputBuffer.java:291)
at org.apache.coyote.http11.InternalAprOutputBuffer.writeToSocket(InternalAprOutputBuffer.java:244)
at org.apache.coyote.http11.InternalAprOutputBuffer.flushBuffer(InternalAprOutputBuffer.java:213)
at org.apache.coyote.http11.AbstractOutputBuffer.flush(AbstractOutputBuffer.java:305)
at org.apache.coyote.http11.AbstractHttp11Processor.action(AbstractHttp11Processor.java:765)
at org.apache.coyote.Response.action(Response.java:177)
at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:349)
at org.apache.catalina.connector.OutputBuffer.flush(OutputBuffer.java:317)
at org.apache.catalina.connector.Response.flushBuffer(Response.java:510)
at org.apache.catalina.connector.ResponseFacade.flushBuffer(ResponseFacade.java:318)
at javax.servlet.ServletResponseWrapper.flushBuffer(ServletResponseWrapper.java:176)
at org.springframework.boot.web.support.ErrorPageFilter$ErrorWrapperResponse.flushBuffer(ErrorPageFilter.java:318)
at javax.servlet.ServletResponseWrapper.flushBuffer(ServletResponseWrapper.java:176)
at javax.servlet.ServletResponseWrapper.flushBuffer(ServletResponseWrapper.java:176)
at javax.servlet.ServletResponseWrapper.flushBuffer(ServletResponseWrapper.java:176)
at org.springframework.security.web.util.OnCommittedResponseWrapper.flushBuffer(OnCommittedResponseWrapper.java:159)
at org.springframework.http.server.ServletServerHttpResponse.flush(ServletServerHttpResponse.java:96)
at org.springframework.web.socket.sockjs.transport.session.AbstractHttpSockJsSession.writeFrameInternal(AbstractHttpSockJsSession.java:350)
at org.springframework.web.socket.sockjs.transport.session.AbstractSockJsSession.writeFrame(AbstractSockJsSession.java:318)
at org.springframework.web.socket.sockjs.transport.session.AbstractSockJsSession.sendHeartbeat(AbstractSockJsSession.java:251)
at org.springframework.web.socket.sockjs.transport.session.AbstractSockJsSession$HeartbeatTask.run(AbstractSockJsSession.java:455)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
So actually only the second part of the error disappeared...
=== ORIGINAL POST ===
I find a lot of the following errors in my Tomcat 8.0.28 catalina.out logs. I guess they are caused by the WebSocket setup I use. I did the implementation in a service class as follows:
def publishNewChart(def productId) {
def linechart = chartService.getLinechart(Product.load(productId));
brokerMessagingTemplate.convertAndSend("/topic/trade_chart/" + productId, linechart);
}
And on the client side, the following:
$(function() {
var socket = new SockJS("${createLink(uri: '/stomp')}");
var client = Stomp.over(socket);
client.connect({}, function() {
client.subscribe("/topic/trade_chart/${product?.id}", function(message) {
var messageJson = $.parseJSON(message.body);
refreshPriceChart($.parseJSON(messageJson.data));
brushed();
});
});
});
Does anybody have a clue where these errors come from and how to deal with them?
Thanks a lot in advance!
02-Mar-2017 13:15:33.349 INFO [MessageBroker-4] org.apache.coyote.AbstractProcessor.setErrorState An error occurred in processing while on a non-container thread. The connection will be closed immediately
java.io.IOException: APR error: -32
at org.apache.coyote.http11.InternalAprOutputBuffer.writeToSocket(InternalAprOutputBuffer.java:291)
at org.apache.coyote.http11.InternalAprOutputBuffer.writeToSocket(InternalAprOutputBuffer.java:244)
at org.apache.coyote.http11.InternalAprOutputBuffer.flushBuffer(InternalAprOutputBuffer.java:213)
at org.apache.coyote.http11.AbstractOutputBuffer.flush(AbstractOutputBuffer.java:305)
at org.apache.coyote.http11.AbstractHttp11Processor.action(AbstractHttp11Processor.java:765)
at org.apache.coyote.Response.action(Response.java:177)
at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:349)
at org.apache.catalina.connector.OutputBuffer.flush(OutputBuffer.java:317)
at org.apache.catalina.connector.Response.flushBuffer(Response.java:510)
at org.apache.catalina.connector.ResponseFacade.flushBuffer(ResponseFacade.java:318)
at javax.servlet.ServletResponseWrapper.flushBuffer(ServletResponseWrapper.java:176)
at org.springframework.boot.web.support.ErrorPageFilter$ErrorWrapperResponse.flushBuffer(ErrorPageFilter.java:311)
at javax.servlet.ServletResponseWrapper.flushBuffer(ServletResponseWrapper.java:176)
at javax.servlet.ServletResponseWrapper.flushBuffer(ServletResponseWrapper.java:176)
at javax.servlet.ServletResponseWrapper.flushBuffer(ServletResponseWrapper.java:176)
at org.springframework.security.web.util.OnCommittedResponseWrapper.flushBuffer(OnCommittedResponseWrapper.java:158)
at org.springframework.http.server.ServletServerHttpResponse.flush(ServletServerHttpResponse.java:96)
at org.springframework.web.socket.sockjs.transport.session.AbstractHttpSockJsSession.writeFrameInternal(AbstractHttpSockJsSession.java:350)
at org.springframework.web.socket.sockjs.transport.session.AbstractSockJsSession.writeFrame(AbstractSockJsSession.java:322)
at org.springframework.web.socket.sockjs.transport.session.AbstractSockJsSession.sendHeartbeat(AbstractSockJsSession.java:255)
at org.springframework.web.socket.sockjs.transport.session.AbstractSockJsSession$HeartbeatTask.run(AbstractSockJsSession.java:451)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
ERROR org.springframework.scheduling.support.TaskUtils$LoggingErrorHandler - Unexpected error occurred in scheduled task.
org.springframework.web.socket.sockjs.SockJsTransportFailureException: Failed to write SockJsFrame content='h'; nested exception is org.apache.catalina.connector.ClientAbortException: java.io.IOException: APR error: -32
at org.springframework.web.socket.sockjs.transport.session.AbstractSockJsSession.writeFrame(AbstractSockJsSession.java:339)
at org.springframework.web.socket.sockjs.transport.session.AbstractSockJsSession.sendHeartbeat(AbstractSockJsSession.java:255)
at org.springframework.web.socket.sockjs.transport.session.AbstractSockJsSession$HeartbeatTask.run(AbstractSockJsSession.java:451)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.catalina.connector.ClientAbortException: java.io.IOException: APR error: -32
at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:353)
at org.apache.catalina.connector.OutputBuffer.flush(OutputBuffer.java:317)
at org.apache.catalina.connector.Response.flushBuffer(Response.java:510)
at org.apache.catalina.connector.ResponseFacade.flushBuffer(ResponseFacade.java:318)
at javax.servlet.ServletResponseWrapper.flushBuffer(ServletResponseWrapper.java:176)
at org.springframework.boot.web.support.ErrorPageFilter$ErrorWrapperResponse.flushBuffer(ErrorPageFilter.java:311)
at javax.servlet.ServletResponseWrapper.flushBuffer(ServletResponseWrapper.java:176)
at javax.servlet.ServletResponseWrapper.flushBuffer(ServletResponseWrapper.java:176)
at javax.servlet.ServletResponseWrapper.flushBuffer(ServletResponseWrapper.java:176)
at org.springframework.security.web.util.OnCommittedResponseWrapper.flushBuffer(OnCommittedResponseWrapper.java:158)
at org.springframework.http.server.ServletServerHttpResponse.flush(ServletServerHttpResponse.java:96)
at org.springframework.web.socket.sockjs.transport.session.AbstractHttpSockJsSession.writeFrameInternal(AbstractHttpSockJsSession.java:350)
at org.springframework.web.socket.sockjs.transport.session.AbstractSockJsSession.writeFrame(AbstractSockJsSession.java:322)
... 10 common frames omitted
Caused by: java.io.IOException: APR error: -32
at org.apache.coyote.http11.InternalAprOutputBuffer.writeToSocket(InternalAprOutputBuffer.java:291)
at org.apache.coyote.http11.InternalAprOutputBuffer.writeToSocket(InternalAprOutputBuffer.java:244)
at org.apache.coyote.http11.InternalAprOutputBuffer.flushBuffer(InternalAprOutputBuffer.java:213)
at org.apache.coyote.http11.AbstractOutputBuffer.flush(AbstractOutputBuffer.java:305)
at org.apache.coyote.http11.AbstractHttp11Processor.action(AbstractHttp11Processor.java:765)
at org.apache.coyote.Response.action(Response.java:177)
at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:349)
... 22 common frames omitted
I am trying to use Spring Integration to poll RSS feeds, and I'm using an SO user feed URL as an example. However, when the application runs I get the following error:
Disconnected from the target VM, address: '127.0.0.1:58603', transport: 'socket'
2016-03-04 19:34:36.252 ERROR 2345 --- [ask-scheduler-4] o.s.integration.handler.LoggingHandler : org.springframework.messaging.MessagingException: Failed to retrieve feed at url 'http://stackoverflow.com/feeds/user/813852'; nested exception is com.rometools.fetcher.FetcherException: Authentication required for that resource. HTTP Response code was:403
at org.springframework.integration.feed.inbound.FeedEntryMessageSource.getFeed(FeedEntryMessageSource.java:216)
at org.springframework.integration.feed.inbound.FeedEntryMessageSource.populateEntryList(FeedEntryMessageSource.java:182)
at org.springframework.integration.feed.inbound.FeedEntryMessageSource.doReceive(FeedEntryMessageSource.java:157)
at org.springframework.integration.feed.inbound.FeedEntryMessageSource.receive(FeedEntryMessageSource.java:122)
at org.springframework.integration.endpoint.SourcePollingChannelAdapter.receiveMessage(SourcePollingChannelAdapter.java:175)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.doPoll(AbstractPollingEndpoint.java:224)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.access$000(AbstractPollingEndpoint.java:57)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$1.call(AbstractPollingEndpoint.java:176)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$1.call(AbstractPollingEndpoint.java:173)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$Poller$1.run(AbstractPollingEndpoint.java:330)
at org.springframework.integration.util.ErrorHandlingTaskExecutor$1.run(ErrorHandlingTaskExecutor.java:55)
at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:50)
at org.springframework.integration.util.ErrorHandlingTaskExecutor.execute(ErrorHandlingTaskExecutor.java:51)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$Poller.run(AbstractPollingEndpoint.java:324)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:81)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.rometools.fetcher.FetcherException: Authentication required for that resource. HTTP Response code was:403
at com.rometools.fetcher.impl.AbstractFeedFetcher.throwAuthenticationError(AbstractFeedFetcher.java:183)
at com.rometools.fetcher.impl.AbstractFeedFetcher.handleErrorCodes(AbstractFeedFetcher.java:170)
at com.rometools.fetcher.impl.HttpURLFeedFetcher.retrieveAndCacheFeed(HttpURLFeedFetcher.java:186)
at com.rometools.fetcher.impl.HttpURLFeedFetcher.retrieveFeed(HttpURLFeedFetcher.java:140)
at com.rometools.fetcher.impl.HttpURLFeedFetcher.retrieveFeed(HttpURLFeedFetcher.java:99)
at org.springframework.integration.feed.inbound.FeedEntryMessageSource.getFeed(FeedEntryMessageSource.java:204)
... 22 more
The URL loads fine in the browser (even in incognito mode, so the error message is probably misleading). I also downloaded an RSS reader and the feed worked fine.
This is how I configured the feed:
<int-feed:inbound-channel-adapter id="stackOverflow"
channel="log"
url="http://stackoverflow.com/feeds/user/813852">
<int:poller fixed-rate="10000" max-messages-per-poll="100" />
</int-feed:inbound-channel-adapter>
<int:logging-channel-adapter id="log" level="DEBUG" />
I also tried another RSS feed URL and it worked. Any ideas what could be wrong?
I just ran a Wireshark trace and it looks like Stack Overflow doesn't like the Java user agent:
User-Agent: Java/1.8.0_66\r\n
...
<p>The owner of this website (stackoverflow.com) has banned your access based
on your browser's signature (27e797571e1923de-ua21).</p>\n
Setting a custom user agent goes around this restriction:
public class CustomFeedFetcher extends HttpURLFeedFetcher {
    @Override
    public void setUserAgent(String s) {
        // ignore the requested value and always send a custom agent name
        super.setUserAgent("AgentName");
    }
}
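As a quick sanity check (a sketch only: FetcherCheck is a hypothetical class name, CustomFeedFetcher is the class above, and the URL is the feed from the question), fetching the feed directly with the overridden fetcher should now return entries instead of the 403:

import java.net.URL;
import com.rometools.rome.feed.synd.SyndFeed;

public class FetcherCheck {
    public static void main(String[] args) throws Exception {
        CustomFeedFetcher fetcher = new CustomFeedFetcher();
        // whatever is passed here, the override forces "AgentName"
        fetcher.setUserAgent("anything");
        SyndFeed feed = fetcher.retrieveFeed(new URL("http://stackoverflow.com/feeds/user/813852"));
        System.out.println("Fetched " + feed.getEntries().size() + " entries");
    }
}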
I've been following the instructions here and have completed step 1, the installation of the plugin. I'm now trying to register the snapshot repository by executing the following command in Sense:
PUT /_snapshot/ElasticSearch
{
"type": "s3",
"settings": {
"bucket": "SomeBucket",
"base_path" : "ElasticSearch",
"secret_key": "xxx",
"acess_key" : "xxx"
}
}
From this I get the following error:
{
"error": "RepositoryException[[ElasticSearch] failed to create repository]; nested: CreationException[Guice creation errors:\n\n1) Error injecting constructor, java.lang.NoClassDefFoundError: org/elasticsearch/common/blobstore/ImmutableBlobContainer\n at org.elasticsearch.repositories.s3.S3Repository.<init>(Unknown Source)\n while locating org.elasticsearch.repositories.s3.S3Repository\n while locating org.elasticsearch.repositories.Repository\n\n1 error]; nested: NoClassDefFoundError[org/elasticsearch/common/blobstore/ImmutableBlobContainer]; nested: ClassNotFoundException[org.elasticsearch.common.blobstore.ImmutableBlobContainer]; ",
"status": 500
}
Any ideas on how to start troubleshooting this? All the information I get in the logs is:
[2015-07-13 16:20:40,519][WARN ][repositories ] [app1050-ela-dacq] failed to create repository [s3][ElasticSearch -d]
org.elasticsearch.common.inject.CreationException: Guice creation errors:
1) Error injecting constructor, java.lang.NoClassDefFoundError: org/elasticsearch/common/blobstore/ImmutableBlobContainer
at org.elasticsearch.repositories.s3.S3Repository.<init>(Unknown Source)
while locating org.elasticsearch.repositories.s3.S3Repository
while locating org.elasticsearch.repositories.Repository
1 error
at org.elasticsearch.common.inject.internal.Errors.throwCreationExceptionIfErrorsExist(Errors.java:344)
at org.elasticsearch.common.inject.InjectorBuilder.injectDynamically(InjectorBuilder.java:178)
at org.elasticsearch.common.inject.InjectorBuilder.build(InjectorBuilder.java:110)
at org.elasticsearch.common.inject.InjectorImpl.createChildInjector(InjectorImpl.java:131)
at org.elasticsearch.common.inject.ModulesBuilder.createChildInjector(ModulesBuilder.java:69)
at org.elasticsearch.repositories.RepositoriesService.createRepositoryHolder(RepositoriesService.java:409)
at org.elasticsearch.repositories.RepositoriesService.registerRepository(RepositoriesService.java:373)
at org.elasticsearch.repositories.RepositoriesService.access$100(RepositoriesService.java:57)
at org.elasticsearch.repositories.RepositoriesService$1.execute(RepositoriesService.java:112)
at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:374)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:188)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:158)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NoClassDefFoundError: org/elasticsearch/common/blobstore/ImmutableBlobContainer
at org.elasticsearch.repositories.s3.S3Repository.<init>(S3Repository.java:124)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.elasticsearch.common.inject.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:54)
at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:86)
at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:98)
at org.elasticsearch.common.inject.FactoryProxy.get(FactoryProxy.java:52)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45)
at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:837)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42)
at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:57)
at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)
at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:200)
at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:193)
at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:830)
at org.elasticsearch.common.inject.InjectorBuilder.loadEagerSingletons(InjectorBuilder.java:193)
Note that the version of Elasticsearch I'm using is 1.6.0, so the matching plugin version is 2.6.0. The output of the command 'java -version' is the following:
openjdk version "1.8.0_45"
OpenJDK Runtime Environment (build 1.8.0_45-b13)
OpenJDK 64-Bit Server VM (build 25.45-b02, mixed mode)
It turns out that the version of the plugin that had been installed was actually 2.2.0 instead of 2.6.0. The answer tlrx gives here is the clue.
It's also good to remember to restart Elasticsearch after you've updated the plugin.
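One way to catch this kind of mismatch earlier (a hedged suggestion, assuming the cluster is reachable from Sense) is to list the installed plugins and their versions before registering the repository:

GET /_cat/plugins?v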