This is the problem I ran into when trying to launch JBoss 6.3 with a Maven project deployment; the launch was fine without deploying the project. The same project also deploys and runs fine on Tomcat 7.x.
The message returned is:
Failed to enable apledi.war.
Unexpected HTTP response: 500
Request
{
"address" => [("deployment" => "apledi.war")],
"operation" => "deploy"
}
Response
Internal Server Error
{
"outcome" => "failed",
"failure-description" => {"JBAS014671: Failed services" => {"jboss.deployment.unit.\"apledi.war\".DEPENDENCIES" => "org.jboss.msc.service.StartException in service jboss.deployment.unit.\"apledi.war\".DEPENDENCIES: JBAS018733: Failed to process phase DEPENDENCIES of deployment \"apledi.war\"
Caused by: org.jboss.as.server.deployment.DeploymentUnitProcessingException: org.jboss.as.server.deployment.DeploymentUnitProcessingException: java.lang.NullPointerException
Caused by: org.jboss.as.server.deployment.DeploymentUnitProcessingException: java.lang.NullPointerException
Caused by: java.lang.NullPointerException"}},
"rolled-back" => true
}
This happens when you have multiple persistence units but did not specify a unitName on your @PersistenceContext. Check that all your entity managers are annotated properly, for example:
@PersistenceContext(unitName = "yourUnitName")
private EntityManager entityManager;
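For illustration, a minimal sketch of a correctly annotated bean when two persistence units are in play; the class and unit names here are hypothetical, not from the original deployment:

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

// Hypothetical example: "customersPU" and "ordersPU" stand in for the unit
// names declared in persistence.xml. With more than one unit defined,
// omitting unitName leaves the container unable to pick one, which on
// JBoss surfaces as the NullPointerException in the DEPENDENCIES phase.
@Stateless
public class CustomerService {

    @PersistenceContext(unitName = "customersPU")
    private EntityManager customersEm;

    @PersistenceContext(unitName = "ordersPU")
    private EntityManager ordersEm;
}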
I'm trying to connect my SonarLint plugin to my SonarQube server, but when I try to update the bindings, this error appears in the SonarLint output:
Failed to read file: /home/valentina/.sonarlint/storage/38364531464134442d41594f79334277794d7331626178675137786850/projects/486f7473706f74/project_branches.pb
...
Caused by: org.sonarsource.sonarlint.core.serverconnection.storage.StorageException: Failed to read file: /home/valentina/.sonarlint/storage/38364531464134442d41594f79334277794d7331626178675137786850/projects/486f7473706f74/project_branches.pb
...
Caused by: java.nio.file.NoSuchFileException: /home/valentina/.sonarlint/storage/38364531464134442d41594f79334277794d7331626178675137786850/projects/486f7473706f74/project_branches.pb
...
The SonarLint settings file is the following:
{
  "sonarlint.connectedMode.connections.sonarqube": [
    {
      "serverUrl": "http://localhost:9000",
      "connectionId": "86E1FA4D-AYOy3BwyMs1baxgQ7xhP",
      "token": "squ_5e74e1ff17a56c1ae026df4663503bd1369f9966"
    }
  ],
  "sonarlint.connectedMode.project": {
    "connectionId": "86E1FA4D-AYOy3BwyMs1baxgQ7xhP",
    "projectKey": "Hotspot"
  }
}
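The NoSuchFileException suggests that SonarLint's local storage for this binding (under ~/.sonarlint/storage) is missing or incomplete. A commonly suggested workaround, offered here as an assumption rather than something confirmed in this thread, is to delete that storage directory and update the bindings again so the plugin re-synchronizes everything from the server.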
package mock
import com.intuit.karate.gatling.PreDef._
import io.gatling.core.Predef._
import scala.concurrent.duration._
class CatsKarateSimulation extends Simulation {

  MockUtils.startServer()

  val feeder = Iterator.continually(Map("catName" -> MockUtils.getNextCatName))

  val protocol = karateProtocol(
    "/cats/{id}" -> Nil,
    "/cats" -> pauseFor("get" -> 15, "post" -> 25)
  )

  protocol.nameResolver = (req, ctx) => req.getHeader("karate-name")

  val create = scenario("create").feed(feeder).exec(karateFeature("classpath:mock/cats-create.feature"))
  val delete = scenario("delete").group("delete cats") {
    exec(karateFeature("classpath:mock/cats-delete.feature#name=delete"))
  }
  val custom = scenario("custom").exec(karateFeature("classpath:mock/custom-rpc.feature"))

  setUp(
    create.inject(rampUsers(10) during (5 seconds)).protocols(protocol),
    delete.inject(rampUsers(5) during (5 seconds)).protocols(protocol),
    custom.inject(rampUsers(10) during (5 seconds)).protocols(protocol)
  )
}
Can anyone help with why I am getting this error?
My project currently has Karate API scripts and a few Java helper classes; I need to run them through Gatling with Gradle.
Java: 11
Gradle: 7.5.1
Windows 10
Eclipse IDE
build.gradle (screenshot omitted)
Simulation (screenshot omitted)
Folder structure (screenshot omitted)
Error
15:50:51.523 [main] ERROR io.gatling.app.Gatling$ - Run crashed
java.lang.IllegalArgumentException: User defined Simulation class mock.CatsKarateSimulation could not be loaded
at io.gatling.app.Selection$Selector.findUserDefinedSimulationInClassloader$1(Selection.scala:87)
at io.gatling.app.Selection$Selector.$anonfun$singleSimulationFromConfig$4(Selection.scala:91)
at scala.Option.orElse(Option.scala:477)
at io.gatling.app.Selection$Selector.$anonfun$singleSimulationFromConfig$3(Selection.scala:91)
at scala.Option.flatMap(Option.scala:283)
at io.gatling.app.Selection$Selector.singleSimulationFromConfig(Selection.scala:90)
at io.gatling.app.Selection$Selector.$anonfun$selection$1(Selection.scala:52)
at scala.Option.getOrElse(Option.scala:201)
at io.gatling.app.Selection$Selector.selection(Selection.scala:44)
at io.gatling.app.Selection$.apply(Selection.scala:36)
at io.gatling.app.Runner.run0(Runner.scala:61)
at io.gatling.app.Runner.run(Runner.scala:49)
at io.gatling.app.Gatling$.start(Gatling.scala:87)
at io.gatling.app.Gatling$.fromArgs(Gatling.scala:49)
at io.gatling.app.Gatling$.main(Gatling.scala:38)
at io.gatling.app.Gatling.main(Gatling.scala)
Caused by: java.lang.ClassNotFoundException: mock.CatsKarateSimulation
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Class.java:315)
at io.gatling.app.Selection$Selector.$anonfun$singleSimulationFromConfig$2(Selection.scala:76)
at scala.util.Try$.apply(Try.scala:210)
at io.gatling.app.Selection$Selector.findUserDefinedSimulationInClassloader$1(Selection.scala:76)
... 15 common frames omitted
(The same IllegalArgumentException is then rethrown in the main thread with an identical stack trace.)
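For reference, a minimal build.gradle sketch for running Karate-Gatling simulations through the Gatling Gradle plugin; the versions and coordinates below are illustrative assumptions, not taken from the screenshots:

plugins {
    id 'scala'
    id 'io.gatling.gradle' version '3.8.4' // plugin version is an assumption
}

repositories {
    mavenCentral()
}

dependencies {
    // Karate's Gatling integration; the version is an assumption
    gatlingImplementation 'com.intuit.karate:karate-gatling:1.3.0'
}

The plugin compiles simulations from src/gatling/scala, so the class above would need to live at src/gatling/scala/mock/CatsKarateSimulation.scala for ./gradlew gatlingRun to find it; a ClassNotFoundException like the one here is typically what you see when the simulation is not on that source path.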
I got the following error while creating the Elasticsearch sink connector.
CREATE SINK CONNECTOR testdemosinkconnector WITH (
  "type.name" = '_doc',
  "input.data.format" = 'AVRO',
  "connector.class" = 'io.confluent.connect.elasticsearch.ElasticsearchSinkConnector',
  "tasks.max" = '1',
  "transforms" = 'Dealership',
  "topics" = 'es.contact.model',
  "transforms.Dealership.type" = 'io.confluent.connect.transforms.ExtractTopic$Value',
  "transforms.Dealership.field" = 'indexTopicName',
  "transforms.Dealership.skip.missing.or.null" = 'true',
  "connection.url" = 'https://elasticsearchdemo.es.us-central1.gcp.cloud.es.io:9243',
  "connection.username" = 'elastic',
  "connection.password" = 'BUgBxOBg3dv4jp4Z3W7p4tHC',
  "key.ignore" = 'true',
  "value.converter" = 'io.confluent.connect.avro.AvroConverter',
  "value.converter.schemas.enable" = 'true',
  "value.converter.schema.registry.url" = 'http://localhost:8081',
  "bulk.size.bytes" = '-1',
  "behavior.on.null.values" = 'IGNORE',
  "behavior.on.malformed.documents" = 'IGNORE',
  "max.retries" = '5',
  "retry.backoff.ms" = '5000'
);
The error is:
FAILED | org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:618)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:334)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:235)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:204)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:200)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:255)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.kafka.connect.errors.ConnectException: Bulk request failed
at io.confluent.connect.elasticsearch.ElasticsearchClient$1.afterBulk(ElasticsearchClient.java:397)
at org.elasticsearch.action.bulk.BulkRequestHandler$1.onFailure(BulkRequestHandler.java:70)
at org.elasticsearch.action.ActionListener$5.onFailure(ActionListener.java:258)
at org.elasticsearch.action.bulk.Retry$RetryHandler.onFailure(Retry.java:126)
at io.confluent.connect.elasticsearch.ElasticsearchClient.lambda$null$1(ElasticsearchClient.java:174)
... 5 more
Caused by: org.apache.kafka.connect.errors.ConnectException: Failed to execute bulk request due to 'java.io.IOException: Unable to parse response body for Response{requestLine=POST /_bulk?timeout=1m HTTP/1.1, host=https://elasticsearchdemo.es.us-central1.gcp.cloud.es.io:9243, response=HTTP/1.1 200 OK}' after 6 attempt(s)
at io.confluent.connect.elasticsearch.RetryUtil.callWithRetries(RetryUtil.java:165)
at io.confluent.connect.elasticsearch.RetryUtil.callWithRetries(RetryUtil.java:119)
at io.confluent.connect.elasticsearch.ElasticsearchClient.callWithRetries(ElasticsearchClient.java:425)
at io.confluent.connect.elasticsearch.ElasticsearchClient.lambda$null$1(ElasticsearchClient.java:168)
... 5 more
Caused by: java.io.IOException: Unable to parse response body for Response{requestLine=POST /_bulk?timeout=1m HTTP/1.1, host=https://elasticsearchdemo.es.us-central1.gcp.cloud.es.io:9243, response=HTTP/1.1 200 OK}
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1632)
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1583)
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1553)
at org.elasticsearch.client.RestHighLevelClient.bulk(RestHighLevelClient.java:533)
at io.confluent.connect.elasticsearch.ElasticsearchClient.lambda$null$0(ElasticsearchClient.java:170)
at io.confluent.connect.elasticsearch.RetryUtil.callWithRetries(RetryUtil.java:158)
... 8 more
Caused by: java.lang.NullPointerException
at java.base/java.util.Objects.requireNonNull(Objects.java:221)
at org.elasticsearch.action.DocWriteResponse.<init>(DocWriteResponse.java:127)
at org.elasticsearch.action.index.IndexResponse.<init>(IndexResponse.java:54)
at org.elasticsearch.action.index.IndexResponse.<init>(IndexResponse.java:39)
at org.elasticsearch.action.index.IndexResponse$Builder.build(IndexResponse.java:107)
at org.elasticsearch.action.index.IndexResponse$Builder.build(IndexResponse.java:104)
at org.elasticsearch.action.bulk.BulkItemResponse.fromXContent(BulkItemResponse.java:159)
at org.elasticsearch.action.bulk.BulkResponse.fromXContent(BulkResponse.java:196)
at org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:1892)
at org.elasticsearch.client.RestHighLevelClient.lambda$performRequestAndParseEntity$8(RestHighLevelClient.java:1554)
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1630)
... 13 more
Please help me to resolve this error.
Elasticsearch Sink Connector version: 11.1.10
Elasticsearch version: 8.2.2
Elasticsearch 8 is not supported by version 11.1.10 of the Confluent Elasticsearch Sink Connector, which is most likely why it cannot parse the Elasticsearch response properly.
As of version 11.0.0, the connector uses the Elasticsearch High Level REST Client (version 7.0.1), which means only Elasticsearch 7.x is supported.
https://docs.confluent.io/kafka-connect-elasticsearch/current/overview.html
We are exploring a Fluentd, Kafka, and Elasticsearch integration for our central logging platform.
At present we have an individual topic for each deployment; for example, for deployments customerA, customerB, and customerC we have separate topics customerA, customerB, and customerC on the Kafka cluster.
As long as we push JSON logs to these topics our Elasticsearch sink connectors work fine, but any log format other than JSON tends to break the connector task with this error:
[2021-08-01 17:03:27,338] ERROR WorkerSinkTask{id=central-logs-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask)
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:206)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:132)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:495)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:475)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:325)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:229)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:201)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:189)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:239)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.kafka.connect.errors.DataException: Converting byte[] to Kafka Connect data failed due to serialization error:
at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:366)
at org.apache.kafka.connect.storage.Converter.toConnectData(Converter.java:87)
at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$0(WorkerSinkTask.java:495)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:156)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:190)
... 13 more
Caused by: org.apache.kafka.common.errors.SerializationException: com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'status': was expecting (JSON String, Number, Array, Object or token 'null', 'true' or 'false')
at [Source: (byte[])"status-task-central-logs-3"; line: 1, column: 8]
Caused by: com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'status': was expecting (JSON String, Number, Array, Object or token 'null', 'true' or 'false')
at [Source: (byte[])"status-task-central-logs-3"; line: 1, column: 8]
at com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1840)
at com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:722)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._reportInvalidToken(UTF8StreamJsonParser.java:3560)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._handleUnexpectedValue(UTF8StreamJsonParser.java:2655)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._nextTokenNotInObject(UTF8StreamJsonParser.java:857)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser.nextToken(UTF8StreamJsonParser.java:754)
at com.fasterxml.jackson.databind.ObjectMapper._readTreeAndClose(ObjectMapper.java:4247)
at com.fasterxml.jackson.databind.ObjectMapper.readTree(ObjectMapper.java:2734)
at org.apache.kafka.connect.json.JsonDeserializer.deserialize(JsonDeserializer.java:64)
at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:364)
at org.apache.kafka.connect.storage.Converter.toConnectData(Converter.java:87)
at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$0(WorkerSinkTask.java:495)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:156)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:190)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:132)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:495)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:475)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:325)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:229)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:201)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:189)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:239)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Our Elasticsearch sink connector config is:
{
  "name": "central-logs",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "tasks.max": "16",
    "topics.regex": "kafka-(.*)",
    "connection.url": "https://es",
    "type.name": "_doc",
    "key.ignore": "true",
    "schema.ignore": "true",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "key.converter.schemas.enable": "false",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "false",
    "max.retries": "30",
    "retry.backoff.ms": "10000",
    "read.timeout.ms": "30000",
    "transforms": "AddSuffix,InsertTimestamp,ConvertTimestamp",
    "transforms.AddSuffix.type": "org.apache.kafka.connect.transforms.TimestampRouter",
    "transforms.AddSuffix.topic.format": "${topic}-${timestamp}",
    "transforms.AddSuffix.timestamp.format": "yyyy.MM.dd",
    "transforms.InsertTimestamp.type": "org.apache.kafka.connect.transforms.InsertField$Value",
    "transforms.InsertTimestamp.timestamp.field": "#timestamp",
    "transforms.ConvertTimestamp.type": "org.apache.kafka.connect.transforms.TimestampConverter$Value",
    "transforms.ConvertTimestamp.field": "#timestamp",
    "transforms.ConvertTimestamp.format": "yyyy-MM-dd HH:mm:ss",
    "transforms.ConvertTimestamp.target.type": "string",
    "name": "central-logs"
  }
}
Is there a way to avoid such connector/task failures?
You have configured the connector's value converter to expect JSON:
"value.converter": "org.apache.kafka.connect.json.JsonConverter"
So any message in these topics that is not valid JSON will fail when the JSON parser processes it. Is this parsing actually required? The Elasticsearch sink connector examples do not appear to use it.
Try removing this configuration.
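If some sources will always emit non-JSON records, another option, sketched here from Kafka Connect's built-in error handling rather than anything in this thread, is to tolerate bad records and route them to a dead letter queue instead of failing the task. In properties form (the DLQ topic name is hypothetical):

# Added to the sink connector config; "central-logs-dlq" is a hypothetical topic name.
errors.tolerance=all
errors.log.enable=true
errors.deadletterqueue.topic.name=central-logs-dlq
errors.deadletterqueue.context.headers.enable=true

With this in place, records that fail conversion are written to the DLQ topic with failure context in the record headers, and the task keeps running.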
Hi, I am trying to upload multiple files into an Artifactory repo using my Jenkinsfile. Please find the snippet and the error pasted below.
Artifactory plugin version 2.16.2, Jenkins 2.138.2.
Snippet from the Jenkinsfile and message from the logs:
def uploadSpec = """{
  "files": [
    {
      "pattern": "folder/test.py",
      "target": "reponame-local/folder/"
    }
  ]
}"""
def buildInfo1 = server.upload spec: uploadSpec
I get the error below (all logs are pasted here): Invalid object ID
SEVERE: Failed to execute command Pipe.Flush(-1) (channel JNLP4-connect connection from 10.37.88.189/10.37.88.189:41732)
java.util.concurrent.ExecutionException: Invalid object ID -1 iota=25377
at hudson.remoting.ExportTable.diagnoseInvalidObjectId(ExportTable.java:478)
at hudson.remoting.ExportTable.get(ExportTable.java:397)
at hudson.remoting.Channel.getExportedObject(Channel.java:780)
at hudson.remoting.ProxyOutputStream$Flush.execute(ProxyOutputStream.java:307)
at hudson.remoting.Channel$1.handle(Channel.java:565)
at hudson.remoting.AbstractByteBufferCommandTransport.processCommand(AbstractByteBufferCommandTransport.java:203)
at hudson.remoting.AbstractByteBufferCommandTransport.receive(AbstractByteBufferCommandTransport.java:189)
at org.jenkinsci.remoting.protocol.impl.ChannelApplicationLayer.onRead(ChannelApplicationLayer.java:187)
at org.jenkinsci.remoting.protocol.ApplicationLayer.onRecv(ApplicationLayer.java:207)
at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecv(ProtocolStack.java:669)
at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.processRead(SSLEngineFilterLayer.java:369)
at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.onRecv(SSLEngineFilterLayer.java:117)
at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecv(ProtocolStack.java:669)
at org.jenkinsci.remoting.protocol.NetworkLayer.onRead(NetworkLayer.java:136)
at org.jenkinsci.remoting.protocol.impl.NIONetworkLayer.ready(NIONetworkLayer.java:160)
at org.jenkinsci.remoting.protocol.IOHub$OnReady.run(IOHub.java:795)
at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:59)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.Exception: Object appears to be deallocated at lease before Sun Nov 18 20:34:02 MST 2018
at hudson.remoting.ExportTable.diagnoseInvalidObjectId(ExportTable.java:474)
java.lang.Exception: Error occurred during operation, please refer to logs for more information.
at org.jfrog.build.extractor.producerConsumer.ProducerConsumerExecutor.start(ProducerConsumerExecutor.java:84)
at org.jfrog.build.extractor.clientConfiguration.util.spec.SpecsHelper.uploadArtifactsBySpec(SpecsHelper.java:71)
at org.jfrog.hudson.generic.GenericArtifactsDeployer$FilesDeployerCallable.invoke(GenericArtifactsDeployer.java:190)
Also: hudson.remoting.Channel$CallSiteStackTrace: Remote call to JNLP4-connect connection from xx.xx.xxx.xx/xx.xx.xxx.xx:41732
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1741)
at hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)
at hudson.remoting.Channel.call(Channel.java:955)
at hudson.FilePath.act(FilePath.java:1071)
at hudson.FilePath.act(FilePath.java:1060)
at org.jfrog.hudson.pipeline.executors.GenericUploadExecutor.execution(GenericUploadExecutor.java:52)
at org.jfrog.hudson.pipeline.steps.UploadStep$Execution.run(UploadStep.java:65)
at org.jfrog.hudson.pipeline.steps.UploadStep$Execution.run(UploadStep.java:46)
at org.jenkinsci.plugins.workflow.steps.AbstractSynchronousNonBlockingStepExecution$1$1.call(AbstractSynchronousNonBlockingStepExecution.java:47)
at hudson.security.ACL.impersonate(ACL.java:290)
at org.jenkinsci.plugins.workflow.steps.AbstractSynchronousNonBlockingStepExecution$1.run(AbstractSynchronousNonBlockingStepExecution.java:44)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
Caused: java.lang.RuntimeException: Failed uploading artifacts by spec
at org.jfrog.hudson.generic.GenericArtifactsDeployer$FilesDeployerCallable.invoke(GenericArtifactsDeployer.java:194)
at org.jfrog.hudson.generic.GenericArtifactsDeployer$FilesDeployerCallable.invoke(GenericArtifactsDeployer.java:131)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3085)
at hudson.remoting.UserRequest.perform(UserRequest.java:212)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:369)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at hudson.remoting.Engine$1.lambda$newThread$0(Engine.java:93)
at java.lang.Thread.run(Thread.java:748)
This seems to be an issue on the infrastructure side. Please try with a user that has write (deploy) access to the repository; beyond that, it can only be resolved by the infra team.
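To rule out permissions from the pipeline itself, a minimal scripted-pipeline sketch; the server ID and credentials ID below are hypothetical placeholders for ones configured in Jenkins, with the credentials known to have deploy permission on the target repo:

node {
    // 'my-artifactory' and 'artifactory-writer-creds' are hypothetical IDs
    // configured under Manage Jenkins; substitute your own.
    def server = Artifactory.server 'my-artifactory'
    server.credentialsId = 'artifactory-writer-creds'

    def uploadSpec = """{
      "files": [
        {
          "pattern": "folder/test.py",
          "target": "reponame-local/folder/"
        }
      ]
    }"""

    def buildInfo = server.upload spec: uploadSpec
    server.publishBuildInfo buildInfo
}

If the upload succeeds with these credentials, the original failure was a permissions problem rather than anything in the upload spec.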