PrematureCloseException: Connection prematurely closed DURING response when downloading from Azure Blob Storage - azure-blob-storage

I am trying to download a large file from Azure Blob Storage with these lines of code:
BlobClient blobAsyncClient = new BlobServiceClientBuilder()
        .connectionString(connectionString1)
        .buildClient()
        .getBlobContainerClient(blob)   // container name
        .getBlobClient(client);         // blob name
blobAsyncClient.downloadToFile(path);
And I got this exception:
reactor.core.Exceptions$ReactiveException: reactor.netty.http.client.HttpClientOperations$PrematureCloseException: Connection prematurely closed DURING response
at reactor.core.Exceptions.propagate(Exceptions.java:326)
at reactor.core.publisher.BlockingSingleSubscriber.blockingGet(BlockingSingleSubscriber.java:91)
at reactor.core.publisher.Mono.block(Mono.java:1500)
at com.azure.storage.common.implementation.StorageImplUtils.blockWithOptionalTimeout(StorageImplUtils.java:99)
at com.azure.storage.blob.specialized.BlobClientBase.downloadToFileWithResponse(BlobClientBase.java:557)
at com.gooddata.cloudresource.service.connection.BlobStorageDataSourceService.validateConnection(BlobStorageDataSourceService.java:89)
at com.gooddata.cloudresource.model.worker.CloudConnectionValidationWorker.lambda$doProcessTask$1(CloudConnectionValidationWorker.java:55)
at java.util.Optional.ifPresent(Optional.java:159)
at com.gooddata.cloudresource.model.worker.CloudConnectionValidationWorker.doProcessTask(CloudConnectionValidationWorker.java:55)
at com.gooddata.cloudresource.model.worker.CloudConnectionValidationWorker.doProcessTask(CloudConnectionValidationWorker.java:33)
at com.gooddata.gcf.worker.AbstractWorker.processTask(AbstractWorker.java:27)
at com.gooddata.gcf.http.TaskHolder.call(TaskHolder.java:52)
at com.gooddata.gcf.http.TaskHolder.call(TaskHolder.java:26)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Suppressed: java.lang.Exception: #block terminated with an error
After switching to downloadToFileWithResponse with ParallelTransferOptions and a block size of 0.5 MB per chunk, it sometimes works.
Can someone tell me why this happens?
Is there a stable solution for this?

Please check this link.
If you are downloading a large file from Azure Blob Storage, you could try using a Shared Access Signature (SAS), or download the file in chunks.
To download the file in chunks, please check this thread.
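A minimal sketch of a chunked download with per-chunk retries, assuming a recent azure-storage-blob 12.x client (where ParallelTransferOptions.setBlockSizeLong and DownloadRetryOptions are available); connectionString, containerName, blobName and path are placeholders:

import java.time.Duration;
import com.azure.core.util.Context;
import com.azure.storage.blob.BlobClient;
import com.azure.storage.blob.BlobServiceClientBuilder;
import com.azure.storage.blob.models.DownloadRetryOptions;
import com.azure.storage.common.ParallelTransferOptions;

BlobClient blobClient = new BlobServiceClientBuilder()
        .connectionString(connectionString)
        .buildClient()
        .getBlobContainerClient(containerName)
        .getBlobClient(blobName);

// Download in 512 KB blocks and allow a few retries per block if the
// connection is dropped mid-stream.
ParallelTransferOptions transferOptions = new ParallelTransferOptions()
        .setBlockSizeLong(512 * 1024L);
DownloadRetryOptions retryOptions = new DownloadRetryOptions()
        .setMaxRetryRequests(5);

blobClient.downloadToFileWithResponse(
        path,                    // local file path
        null,                    // null BlobRange = the whole blob
        transferOptions,
        retryOptions,
        null,                    // no request conditions
        false,                   // no per-range MD5 check
        Duration.ofMinutes(30),  // overall timeout
        Context.NONE);

Setting maxRetryRequests above 0 lets the client re-issue the range request for a chunk whose body read was cut off, which is usually what mitigates the PrematureCloseException.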

Related

How to represent or monitor java.io.InterruptedIOException timeout?

We are seeing Caused by: java.io.InterruptedIOException: timeout exceptions in the logs from the server. However, the server is not returning a response code to us.
I am looking for the standard practice for timeout monitoring in Splunk or AppDynamics, so that we can plot a graph of the number of timeouts received per second.
Should we add an error code such as 408 to the exception on the client side, or should we plot the graph based on counting the text "timeout" over time?
Exception Logs
java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at lambdainternal.LambdaRTEntry.main(LambdaRTEntry.java:150)
Caused by: java.io.InterruptedIOException: timeout
at okhttp3.internal.connection.Transmitter.timeoutExit(Transmitter.kt:104)
at okhttp3.internal.connection.Transmitter.maybeReleaseConnection(Transmitter.kt:293)
at okhttp3.internal.connection.Transmitter.noMoreExchanges(Transmitter.kt:257)
at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.kt:192)
at okhttp3.RealCall.execute(RealCall.kt:66)
For AppDynamics the ideal solution would be for a "bad error code" to be returned from the server - this would cause an error to be detected (and mark any associated Business Transaction as being in error) - see https://docs.appdynamics.com/22.2/en/application-monitoring/troubleshooting-applications/errors-and-exceptions#ErrorsandExceptions-BusinessTransactionError
Else you can use Custom Error Configuration to set a logger which signals errors - see https://docs.appdynamics.com/22.2/en/application-monitoring/configure-instrumentation/error-detection#ErrorDetection-ErrorDetectionConfiguration
Else you can capture values using a Data Collector and then use these in Analytics to break out errors - see https://docs.appdynamics.com/22.2/en/application-monitoring/configure-instrumentation/data-collectors + https://docs.appdynamics.com/22.2/en/analytics/configure-analytics/collect-transaction-analytics-data
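If you decide to signal timeouts from the client code itself, so that a log-based rule in Splunk or an AppDynamics Custom Error Configuration can count them, a minimal sketch might look like the following, assuming OkHttp 3.12+ and a hypothetical URL and SLF4J logger:

import java.io.IOException;
import java.io.InterruptedIOException;
import java.time.Duration;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TimeoutAwareClient {
    private static final Logger log = LoggerFactory.getLogger(TimeoutAwareClient.class);

    private final OkHttpClient client = new OkHttpClient.Builder()
            .callTimeout(Duration.ofSeconds(10))   // overall per-call budget
            .build();

    public String fetch(String url) throws IOException {
        Request request = new Request.Builder().url(url).build();
        try (Response response = client.newCall(request).execute()) {
            return response.body().string();
        } catch (InterruptedIOException e) {
            // One well-known marker per timeout, so a log-based rule can
            // count "UPSTREAM_TIMEOUT" occurrences per second.
            log.error("UPSTREAM_TIMEOUT calling {}", url, e);
            throw e;
        }
    }
}

Whether you then map the marker to a synthetic 408 or simply count the string over time is a dashboarding choice; AppDynamics will only flag the Business Transaction as in error if one of the detection rules above is configured.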

JanusGraph can not connect to ElasticSearch after Cluster Name change

I'm trying to instantiate JanusGraph with the following configuration, using Cassandra as storage backend and ElasticSearch as indexing backend:
JanusGraph graph = JanusGraphFactory.build()
.set("storage.backend", "cassandra")
.set("storage.hostname", "localhost")
.set("cache.db-cache", true)
.set("schema.default", "none")
.set("index.search.backend", "elasticsearch")
.set("index.search.elasticsearch.client-only", "false")
.set("index.search.elasticsearch.local-mode", "true")
.open();
The above code works if Cassandra's cluster is named Test Cluster. If I rename it to something else, an exception is thrown:
java.lang.IllegalArgumentException: Could not instantiate implementation: org.janusgraph.diskstorage.es.ElasticSearchIndex
at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:69)
at org.janusgraph.diskstorage.Backend.getImplementationClass(Backend.java:477)
at org.janusgraph.diskstorage.Backend.getIndexes(Backend.java:464)
at org.janusgraph.diskstorage.Backend.<init>(Backend.java:149)
at org.janusgraph.graphdb.configuration.GraphDatabaseConfiguration.getBackend(GraphDatabaseConfiguration.java:1850)
at org.janusgraph.graphdb.database.StandardJanusGraph.<init>(StandardJanusGraph.java:134)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:107)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:97)
at org.janusgraph.core.JanusGraphFactory$Builder.open(JanusGraphFactory.java:152)
at engineering.divine.core.GraphFactory.cassandraGraph(GraphFactory.java:403)
at engineering.divine.core.GraphFactory.graph(GraphFactory.java:298)
at engineering.divine.core.GraphFactory.getDefault(GraphFactory.java:99)
at engineering.divine.repository.Repository.listRepositoriesToUpdate(Repository.java:130)
at engineering.divine.daemon.RepositoryAnalysisDaemon.run(RepositoryAnalysisDaemon.java:24)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:58)
... 20 more
Caused by: org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: []
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:279)
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:198)
at org.elasticsearch.client.transport.support.InternalTransportClusterAdminClient.execute(InternalTransportClusterAdminClient.java:86)
at org.elasticsearch.client.support.AbstractClusterAdminClient.health(AbstractClusterAdminClient.java:127)
at org.elasticsearch.action.admin.cluster.health.ClusterHealthRequestBuilder.doExecute(ClusterHealthRequestBuilder.java:92)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:91)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:65)
at org.janusgraph.diskstorage.es.ElasticSearchIndex.<init>(ElasticSearchIndex.java:215)
... 25 more
How can I make Elasticsearch work with my new cluster name?
Using Mac OS X 10.11.6; any pointers are highly appreciated.
If it is for testing purposes, reset your data:
Clear all your data from the storage backend (Cassandra)
Restart all the JanusGraph nodes
In JanusGraph:
Each configuration option has a certain mutability level that governs whether and how it can be modified after the database is opened for the first time. The following listing describes the mutability levels.
FIXED
Once the database has been opened, these configuration options cannot be changed for the entire life of the database
GLOBAL_OFFLINE
These options can only be changed for the entire database cluster at once when all instances are shut down
GLOBAL
These options can only be changed globally across the entire database cluster
MASKABLE
These options are global but can be overwritten by a local configuration file
LOCAL
These options can only be provided through a local configuration file
You can find the mutability level of any configuration option at the link below.
Source: http://docs.janusgraph.org/latest/config-ref.html
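For completeness, GLOBAL and GLOBAL_OFFLINE options are normally changed through the management API (for GLOBAL_OFFLINE, while all other JanusGraph instances are shut down). A minimal sketch of that API; the properties file path is a placeholder and cache.db-cache (taken from the question's configuration) is used only to illustrate the call, so check its mutability level in the config reference first:

import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;
import org.janusgraph.core.schema.JanusGraphManagement;

// Open the graph against the existing storage backend, change the option,
// then commit; the new value takes effect for the whole cluster.
JanusGraph graph = JanusGraphFactory.open("conf/janusgraph-cassandra-es.properties");
JanusGraphManagement mgmt = graph.openManagement();
mgmt.set("cache.db-cache", false);   // example only
mgmt.commit();
graph.close();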

Hadoop Job throws java.io.IOException: Attempted read from closed stream

I'm running a simple map-reduce job. This job uses 250 files from common crawl data.
e.g. s3://aws-publicdatasets/common-crawl/parse-output/segment/1341690169105/
If I use 50 or 100 files, everything works OK. But with 250 files I get this error:
java.io.IOException: Attempted read from closed stream.
at org.apache.commons.httpclient.ContentLengthInputStream.read(ContentLengthInputStream.java:159)
at java.io.FilterInputStream.read(FilterInputStream.java:116)
at org.apache.commons.httpclient.AutoCloseInputStream.read(AutoCloseInputStream.java:107)
at org.jets3t.service.io.InterruptableInputStream.read(InterruptableInputStream.java:76)
at org.jets3t.service.impl.rest.httpclient.HttpMethodReleaseInputStream.read(HttpMethodReleaseInputStream.java:136)
at org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.read(NativeS3FileSystem.java:111)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
at java.io.DataInputStream.readByte(DataInputStream.java:248)
at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:299)
at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:320)
at org.apache.hadoop.io.SequenceFile$Reader.readBuffer(SequenceFile.java:1707)
at org.apache.hadoop.io.SequenceFile$Reader.seekToCurrentValue(SequenceFile.java:1773)
at org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.java:1849)
at org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.nextKeyValue(SequenceFileRecordReader.java:74)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:532)
at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
at org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper$SubMapRecordReader.nextKeyValue(MultithreadedMapper.java:180)
at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
at org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper$MapRunner.run(MultithreadedMapper.java:268)
Any clues?
How many map slots do you have to process the input? Is it close to 100?
This is a guess, but it's possible that the connection to S3 is timing out while you process the first batch of files, and by the time slots become available to process further files the connection is no longer open. I believe timeout errors from NativeS3FileSystem show up as IOExceptions.
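If the job really is using MultithreadedMapper (as the stack trace suggests), one way to test that theory is to reduce how many S3 streams each task holds open at once. A minimal sketch against the Hadoop 1.x mapreduce API in the job driver; MyMapper and the job name are placeholders:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper;

Configuration conf = new Configuration();
Job job = new Job(conf, "common-crawl-parse");   // throws IOException
// Run the real mapper inside MultithreadedMapper, but with fewer threads
// so fewer S3 streams sit idle waiting for a free slot.
job.setMapperClass(MultithreadedMapper.class);
MultithreadedMapper.setMapperClass(job, MyMapper.class);
MultithreadedMapper.setNumberOfThreads(job, 4);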

Can't connect to mongohq from heroku in Play 2.0 java?

I am using MongoHQ as a Heroku add-on, and I have recently encountered some problems running my app on my local host. I keep getting the following stack trace as soon as my app tries to access the database.
play.core.ActionInvoker$$anonfun$receive$1$$anon$1: Execution exception [[Network: can't call something : staff.mongohq.com/50.17.135.240:10050/app4620908]]
at play.core.ActionInvoker$$anonfun$receive$1.apply(Invoker.scala:82) [play_2.9.1.jar:2.0]
at play.core.ActionInvoker$$anonfun$receive$1.apply(Invoker.scala:63) [play_2.9.1.jar:2.0]
at akka.actor.Actor$class.apply(Actor.scala:290) [akka-actor.jar:2.0]
at play.core.ActionInvoker.apply(Invoker.scala:61) [play_2.9.1.jar:2.0]
at akka.actor.ActorCell.invoke(ActorCell.scala:617) [akka-actor.jar:2.0]
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:179) [akka-actor.jar:2.0]
Caused by: com.mongodb.MongoException$Network: can't call something : staff.mongohq.com/50.17.135.240:10050/app4620908
at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:227) ~[mongo-java-driver-2.7.3.jar:na]
at com.mongodb.DBApiLayer$MyCollection.__find(DBApiLayer.java:305) ~[mongo-java-driver-2.7.3.jar:na]
at com.mongodb.DB.command(DB.java:160) ~[mongo-java-driver-2.7.3.jar:na]
at com.mongodb.DB.command(DB.java:183) ~[mongo-java-driver-2.7.3.jar:na]
at com.mongodb.DBCollection.getCount(DBCollection.java:864) ~[mongo-java-driver-2.7.3.jar:na]
at com.mongodb.DBCollection.getCount(DBCollection.java:835) ~[mongo-java-driver-2.7.3.jar:na]
Caused by: java.net.SocketException: Operation timed out
at java.net.SocketInputStream.socketRead0(Native Method) ~[na:1.6.0_31]
at java.net.SocketInputStream.read(SocketInputStream.java:129) ~[na:1.6.0_31]
at java.io.BufferedInputStream.fill(BufferedInputStream.java:218) ~[na:1.6.0_31]
at java.io.BufferedInputStream.read1(BufferedInputStream.java:258) ~[na:1.6.0_31]
at java.io.BufferedInputStream.read(BufferedInputStream.java:317) ~[na:1.6.0_31]
at org.bson.io.Bits.readFully(Bits.java:35) ~[mongo-java-driver-2.7.3.jar:na]
I am using the Mongo driver directly, and have made sure to create only a singleton Mongo instance, and I have verified it is being created. As far as I can tell, it is occurring only when I am requesting data.
When authenticating a user, it occurs on the second of these two lines:
DBCollection users = DBManager.getDB("mojulo").getCollection("users"); /*Get the Mongo singleton then get the "users" collection */
DBObject user = users.findOne(new BasicDBObject("username", username)); /*find the user with the specified username */
When trying to register a user, it occurs after I attempt the insert (which does not succeed), on the second of these two lines:
WriteResult result = users.insert(new_user); /* attempt to insert a new user */
if(result.getLastError().ok()){ /* make sure it worked, error occurs on this line */
...
I am fairly clueless here. I feel that there might be a threading issue, because it randomly worked once but then stopped working. Any light someone could shed on this would be greatly appreciated.
EDIT:
Wish I could more explicitly state the problems I am having here, but they are very sporadic. It seems to work maybe 10% of the time, and then stops working.
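For reference, a minimal sketch of the kind of singleton holder described above, assuming the 2.7.3 Java driver and that the MongoHQ add-on exposes its connection string in the MONGOHQ_URL environment variable; the internals here are illustrative, not the asker's actual DBManager:

import java.net.UnknownHostException;
import com.mongodb.DB;
import com.mongodb.Mongo;
import com.mongodb.MongoURI;

public final class DBManager {
    // Single Mongo instance shared by the whole app, built from the
    // MongoHQ connection string.
    private static final MongoURI URI = new MongoURI(System.getenv("MONGOHQ_URL"));
    private static final Mongo MONGO;

    static {
        try {
            MONGO = new Mongo(URI);
        } catch (UnknownHostException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public static DB getDB(String name) {
        DB db = MONGO.getDB(name);
        // MongoHQ requires authentication on the DB handle before use.
        if (URI.getUsername() != null && !db.isAuthenticated()) {
            db.authenticate(URI.getUsername(), URI.getPassword());
        }
        return db;
    }

    private DBManager() {
    }
}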

com.ibm.websphere.jtaextensions.NotSupportedException thrown under load

I have an application containing four MDBs, each of which receives SOAP messages over JMS from MQ. Once the messages have been received, we parse the XML into an object model and process it accordingly, which always involves either loading or saving messages to an Oracle database via Hibernate.
Additionally, we have a Quartz process which fires every minute and may or may not trigger some actions that also read from or write to the database using Hibernate.
When the system is under high load, i.e. processing large numbers of messages (1k+) and potentially performing database reads/writes triggered by our Quartz process, we keep seeing the following exception thrown in our logs.
===============================================================================
at com.integrasp.iatrade.logic.MessageContextRouting.lookup(MessageContextRouting.java:150)
at com.integrasp.iatrade.logic.RequestResponseManager.findRequestDestination(RequestResponseManager.java:153)
at com.integrasp.iatrade.logic.RequestResponseManager.findRequestDestination(RequestResponseManager.java:174)
at com.integrasp.iatrade.logic.IOLogic.processResponse(IOLogic.java:411)
at com.integrasp.iatrade.logic.FxOrderQuoteManager.requestQuote(FxOrderQuoteManager.java:119)
at com.integrasp.iatrade.logic.FxOrderQuoteManager.processRequest(FxOrderQuoteManager.java:682)
at com.integrasp.iatrade.logic.FxOrderSubmissionManager.processRequest(FxOrderSubmissionManager.java:408)
at com.integrasp.iatrade.eo.SubmitOrderRequest.process(SubmitOrderRequest.java:60)
at com.integrasp.iatrade.ejb.BusinessLogicRegister.perform(BusinessLogicRegister.java:85)
at com.integrasp.iatrade.ejb.mdb.OrderSubmissionBean.onMessage(OrderSubmissionBean.java:147)
at com.ibm.ejs.jms.listener.MDBWrapper$PriviledgedOnMessage.run(MDBWrapper.java:302)
at com.ibm.ws.security.util.AccessController.doPrivileged(AccessController.java:63)
at com.ibm.ejs.jms.listener.MDBWrapper.callOnMessage(MDBWrapper.java:271)
at com.ibm.ejs.jms.listener.MDBWrapper.onMessage(MDBWrapper.java:240)
at com.ibm.mq.jms.MQSession.run(MQSession.java:1593)
at com.ibm.ejs.jms.JMSSessionHandle.run(JMSSessionHandle.java:970)
at com.ibm.ejs.jms.listener.ServerSession.connectionConsumerOnMessage(ServerSession.java:891)
at com.ibm.ejs.jms.listener.ServerSession.onMessage(ServerSession.java:656)
at com.ibm.ejs.jms.listener.ServerSession.dispatch(ServerSession.java:623)
at sun.reflect.GeneratedMethodAccessor79.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:615)
at com.ibm.ejs.jms.listener.ServerSessionDispatcher.dispatch(ServerSessionDispatcher.java:37)
at com.ibm.ejs.container.MDBWrapper.onMessage(MDBWrapper.java:96)
at com.ibm.ejs.container.MDBWrapper.onMessage(MDBWrapper.java:132)
at com.ibm.ejs.jms.listener.ServerSession.run(ServerSession.java:481)
at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:1473)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.GeneratedMethodAccessor42.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:615)
at org.hibernate.transaction.WebSphereExtendedJTATransactionLookup$TransactionManagerAdapter$TransactionAdapter.registerSynchronization(WebSphereExtendedJTATransactionLookup.java:225)
... 30 more
Caused by: com.ibm.websphere.jtaextensions.NotSupportedException
at com.ibm.ws.jtaextensions.ExtendedJTATransactionImpl.registerSynchronizationCallbackForCurrentTran(ExtendedJTATransactionImpl.java:247)
... 34 more
Could anybody help to shed some light on what com.ibm.websphere.jtaextensions.NotSupportedException means? The IBM documentation says:
"The exception is thrown by the transaction manager if an attempt is made to register a SynchronizationCallback in an environment or at a time when this function is not available. "
This sounds to me like the container is rejecting Hibernate's call to start a transaction. If anybody has any idea why the container could be throwing this, please let me know.
Thanks in advance
Karl
If you really need to handle high load, I would remove the Hibernate layer between your app and the database. Without Hibernate you have fewer moving parts and more control.
That is the only advice I can give you.
In case anyone is interested: it was a thread that was trying to synchronize with the transaction after the transaction had timed out.
I had assumed that if the transaction timed out, the thread would have been killed; however, this was not the case.
Karl
