Apache Storm - Kinesis Spout throwing AmazonClientException backing off - apache-storm

2016-02-02 16:15:18 c.a.s.k.s.u.InfiniteConstantBackoffRetry [DEBUG] Caught exception of type com.amazonaws.AmazonClientException, backing off for 1000 ms.
I tested GET and PUT requests against the stream directly - both worked flawlessly. I have all three variants: Batch, Storm, and Spark. Spark: used Kinesis Streams - working. Batch: can GET and PUT - working. Storm: planning to use the KinesisSpout library from Kinesis - it is failing with no clue as to why.
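// Build the spout config: stream name, ZooKeeper connection string, initial position in the stream, region, checkpoint interval, and a topology-specific ZooKeeper prefix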
final KinesisSpoutConfig config = new KinesisSpoutConfig(streamname, zookeeperurl);
config.withInitialPositionInStream(ipis);
config.withRegion(Regions.fromName(regionName));
config.withCheckpointIntervalMillis(Integer.parseInt(checkinterval));
config.withZookeeperPrefix("kinesis-zooprefix-" + name);
System.setProperty("aws.accessKeyId", key);
System.setProperty("aws.secretKey", keysecret);
SystemPropertiesCredentialsProvider scp = new SystemPropertiesCredentialsProvider();
final KinesisSpout spout = new KinesisSpoutConflux(config, scp, new ClientConfiguration());
What am I doing wrong?
Storm Logs:
2016-02-02 16:15:17 c.a.s.k.s.KinesisSpout [INFO] KinesisSpoutConflux[taskIndex=0] open() called with topoConfig task index 0 for processing stream Kinesis-Conflux
2016-02-02 16:15:17 c.a.s.k.s.KinesisSpout [DEBUG] KinesisSpoutConflux[taskIndex=0] activating. Starting to process stream Kinesis-Test
2016-02-02 16:15:17 c.a.s.k.s.KinesisHelper [INFO] Using us-east-1 region
I don't see "nextTuple" getting called.
My Versions:
storm = 0.9.3
kinesis-storm-spout = 1.1.1

Related

HDP 3.1.5 | Hive DDL Query issue

I installed HDP 3.1.5 and enabled Kerberos security.
In Hive, a normal CREATE TABLE works fine, but when I try to create any role I get the error below. Please suggest a solution.
0: jdbc:hive2://host> create role userRole;
INFO : Compiling command(queryId=hive_20200320085236_d9a4f82e-dab8-4952-aa53-da11a1cda4b6): create role userRole
INFO : Semantic Analysis Completed (retrial = false)
INFO : Returning Hive schema: Schema(fieldSchemas:null, properties:null)
INFO : Completed compiling command(queryId=hive_20200320085236_d9a4f82e-dab8-4952-aa53-da11a1cda4b6); Time taken: 0.021 seconds
INFO : Executing command(queryId=hive_20200320085236_d9a4f82e-dab8-4952-aa53-da11a1cda4b6): create role bdauserRole
INFO : Starting task [Stage-0:DDL] in serial mode
ERROR : FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. createRole not implemented in FallbackHiveAuthorizer
INFO : Completed executing command(queryId=hive_20200320085236_d9a4f82e-dab8-4952-aa53-da11a1cda4b6); Time taken: 0.02 seconds
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. createRole not implemented in FallbackHiveAuthorizer (state=08S01,code=1)

Is S3NativeFileSystem call killing my Pyspark Application on AWS EMR 4.6.0

My Spark application is failing when it has to access numerous CSV files (~1,000 files at ~63 MB each) from S3 and pipe them into a Spark RDD. The actual process of splitting up the CSVs seems to work, but an extra call to S3NativeFileSystem seems to be causing an error and crashing the job.
To begin, the following is my PySpark Application:
from pyspark import SparkContext
sc = SparkContext("local", "Simple App")
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
import time
startTime = float(time.time())
dataPath = 's3://PATHTODIRECTORY/'
sc._jsc.hadoopConfiguration().set("fs.s3.awsAccessKeyId", "MYKEY")
sc._jsc.hadoopConfiguration().set("fs.s3.awsSecretAccessKey", "MYSECRETKEY")
def buildSchemaDF(tableName, columnList):
    currentRDD = sc.textFile(dataPath + tableName).map(lambda line: line.split("|"))
    currentDF = currentRDD.toDF(columnList)
    return currentDF
loadStartTime = float(time.time())
lineitemDF = buildSchemaDF('lineitem*', ['l_orderkey','l_partkey','l_suppkey','l_linenumber','l_quantity','l_extendedprice','l_discount','l_tax','l_returnflag','l_linestatus','l_shipdate','l_commitdate','l_receiptdate','l_shipinstruct','l_shipmode','l_comment'])
lineitemDF.registerTempTable("lineitem")
loadTimeElapsed = float(time.time()) - loadStartTime
queryStartTime = float(time.time())
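# TPC-H query 1 (pricing summary report) over the lineitem table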
qstr = """
SELECT
lineitem.l_returnflag,
lineitem.l_linestatus,
sum(l_quantity) as sum_qty,
sum(l_extendedprice) as sum_base_price,
sum(l_discount) as sum_disc,
sum(l_tax) as sum_tax,
avg(l_quantity) as avg_qty,
avg(l_extendedprice) as avg_price,
avg(l_discount) as avg_disc,
count(l_orderkey) as count_order
FROM
lineitem
WHERE
l_shipdate <= '19981001'
GROUP BY
l_returnflag,
l_linestatus
ORDER BY
l_returnflag,
l_linestatus
"""
tpch1DF = sqlContext.sql(qstr)
queryTimeElapsed = float(time.time()) - queryStartTime
totalTimeElapsed = float(time.time()) - startTime
tpch1DF.show()
queryResults = [qstr, loadTimeElapsed, queryTimeElapsed, totalTimeElapsed]
distData = sc.parallelize(queryResults)
distData.saveAsTextFile(dataPath + 'queryResults.csv')
print 'Load Time: ' + str(loadTimeElapsed)
print 'Query Time: ' + str(queryTimeElapsed)
print 'Total Time: ' + str(totalTimeElapsed)
To take it step by step I start off by spinning up a Spark EMR Cluster with the following AWS CLI command (carriage returns added for readability):
aws emr create-cluster --name "Big TPCH Spark cluster2" --release-label emr-4.6.0
--applications Name=Spark --ec2-attributes KeyName=blazing-test-aws
--log-uri s3://aws-logs-132950491118-us-west-2/elasticmapreduce/j-1WZ39GFS3IX49/
--instance-type m3.2xlarge --instance-count 6 --use-default-roles
After the EMR cluster finishes provisioning I then copy over my Pyspark application onto the master node at '/home/hadoop/pysparkApp.py'. With it copied over I'm able to add the Step for spark-submit.
aws emr add-steps --cluster-id j-1DQJ8BDL1394N --steps
Type=spark,Name=SparkTPCHTests,Args=[--deploy-mode,cluster,
--conf,spark.yarn.submit.waitAppCompletion=true,--num-executors,5,
--executor-cores,5,--executor-memory,20g,/home/hadoop/tpchSpark.py],
ActionOnFailure=CONTINUE
Now if I run this step over only a few of the aforementioned CSV files the final results will be generated, but the script will still claim to have failed.
I think it's associated with an extra call to S3NativeFileSystem, but I'm not certain. These are the Yarn log messages I'm getting which lead me to that conclusion. The first call appears to work just fine:
16/05/15 23:18:00 INFO HadoopRDD: Input split: s3://data-set-builder/splitLineItem2/lineitemad:0+64901757
16/05/15 23:18:00 INFO latency: StatusCode=[200], ServiceName=[Amazon S3], AWSRequestID=[ED8011CE4E1F6F18], ServiceEndpoint=[https://data-set-builder.s3-us-west-2.amazonaws.com], HttpClientPoolLeasedCount=0, RetryCapacityConsumed=0, RequestCount=1, HttpClientPoolPendingCount=0, HttpClientPoolAvailableCount=2, ClientExecuteTime=[77.956], HttpRequestTime=[77.183], HttpClientReceiveResponseTime=[20.028], RequestSigningTime=[0.229], CredentialsRequestTime=[0.003], ResponseProcessingTime=[0.128], HttpClientSendRequestTime=[0.35],
While the second one does not seem to execute properly, resulting in a "Partial Content" (HTTP 206) response:
16/05/15 23:18:00 INFO S3NativeFileSystem: Opening 's3://data-set-builder/splitLineItem2/lineitemad' for reading
16/05/15 23:18:00 INFO latency: StatusCode=[206], ServiceName=[Amazon S3], AWSRequestID=[10BDDE61AE13AFBE], ServiceEndpoint=[https://data-set-builder.s3.amazonaws.com], HttpClientPoolLeasedCount=0, RetryCapacityConsumed=0, RequestCount=1, HttpClientPoolPendingCount=0, HttpClientPoolAvailableCount=2, Client Execute Time=[296.86], HttpRequestTime=[295.801], HttpClientReceiveResponseTime=[293.667], RequestSigningTime=[0.204], CredentialsRequestTime=[0.002], ResponseProcessingTime=[0.34], HttpClientSendRequestTime=[0.337],
16/05/15 23:18:02 INFO ApplicationMaster: Waiting for spark context initialization ...
I'm lost as to why it's even making the second call to S3NativeFileSystem when the first one appears to have responded effectively and even split the file. Is this something that is a product of my EMR configuration? I know S3Native has file limit issues and that a straight S3 call is optimal, which is what I've tried to do, but this call seems to be there no matter what I do. Please help!
Also, here are a few other error messages from my YARN log in case they are relevant.
1)
16/05/15 23:19:22 ERROR ApplicationMaster: SparkContext did not initialize after waiting for 100000 ms. Please check earlier log output for errors. Failing the application.
16/05/15 23:19:22 INFO ApplicationMaster: Final app status: FAILED, exitCode: 13, (reason: Timed out waiting for SparkContext.)
2)
16/05/15 23:19:22 ERROR DiskBlockObjectWriter: Uncaught exception while reverting partial writes to file /mnt/yarn/usercache/hadoop/appcache/application_1463354019776_0001/blockmgr-f847744b-c87a-442c-9135-57cae3d1f6f0/2b/temp_shuffle_3fe2e09e-f8e4-4e5d-ac96-1538bdc3b401
java.io.FileNotFoundException: /mnt/yarn/usercache/hadoop/appcache/application_1463354019776_0001/blockmgr-f847744b-c87a-442c-9135-57cae3d1f6f0/2b/temp_shuffle_3fe2e09e-f8e4-4e5d-ac96-1538bdc3b401 (No such file or directory)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
at org.apache.spark.storage.DiskBlockObjectWriter.revertPartialWritesAndClose(DiskBlockObjectWriter.scala:162)
at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.stop(BypassMergeSortShuffleWriter.java:226)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
16/05/15 23:19:22 ERROR BypassMergeSortShuffleWriter: Error while deleting file /mnt/yarn/usercache/hadoop/appcache/application_1463354019776_0001/blockmgr-f847744b-c87a-442c-9135-57cae3d1f6f0/2b/temp_shuffle_3fe2e09e-f8e4-4e5d-ac96-1538bdc3b401
16/05/15 23:19:22 WARN TaskMemoryManager: leak 32.3 MB memory from org.apache.spark.unsafe.map.BytesToBytesMap#762be8fe
16/05/15 23:19:22 ERROR Executor: Managed memory leak detected; size = 33816576 bytes, TID = 14
16/05/15 23:19:22 ERROR Executor: Exception in task 13.0 in stage 1.0 (TID 14)
java.io.FileNotFoundException: /mnt/yarn/usercache/hadoop/appcache/application_1463354019776_0001/blockmgr-f847744b-c87a-442c-9135-57cae3d1f6f0/3a/temp_shuffle_b9001fca-bba9-400d-9bc4-c23c002e0aa9 (No such file or directory)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
at org.apache.spark.storage.DiskBlockObjectWriter.open(DiskBlockObjectWriter.scala:88)
at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:140)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
The order of precedence for Spark configurations is:
SparkContext (code/application) > spark-submit > spark-defaults.conf
So, a couple of things to point out here:
Use yarn as the master and cluster as the deploy mode in your spark-submit command:
spark-submit --deploy-mode cluster --master yarn ...
OR
spark-submit --master yarn-cluster ...
Remove "local" string from line sc = SparkContext("local", "Simple App") in your code. Use conf = SparkConf().setAppName(appName)
sc = SparkContext(conf=conf) to initialize Spark context.
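A minimal sketch of that initialization (the app name is kept from the question; the master is then supplied by spark-submit rather than hard-coded):
from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName("Simple App")  # no setMaster("local") here
sc = SparkContext(conf=conf)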
Ref - http://spark.apache.org/docs/latest/programming-guide.html

elasticsearch nest SniffingConnectionPool not working

I'm using Nest.ElasticClient to connect to an Elasticsearch cluster. The cluster is located in an Azure VM with just one node.
The cluster is accessible from outside the VM at http://xxxx.cloudapp.net:9200, and it is also reachable by ElasticClient when not using SniffingConnectionPool - but not when using SniffingConnectionPool.
Here is the network config
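# _local_ binds the loopback address and _site_ binds the VM's site-local (private) address - the private address is what sniffing reports later (the 10.85.x.x node in the audit trail below)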
network.host: [_local_, _site_]
Below is the source code I'm using to get the client and check whether an index exists.
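// SniffingConnectionPool seeds from the given URL(s), then sniffs the cluster and pings the node addresses the cluster reports back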
var pool = new SniffingConnectionPool(urls.Select(url => new Uri(url)));
ConnectionSettings config = new ConnectionSettings(pool) ;
client = new Nest.ElasticClient(config);
IExistsResponse indexExistsResponse = client.IndexExists(indexName);
This is the debug message when I use the client to check whether an index exists (the hostname and IP address have been redacted):
Invalid NEST response built from a unsuccessful low level call on HEAD: /globalleads
# Audit trail of this API call:
- SniffOnStartup: Took: 00:00:00.9846171
- SniffSuccess: Node: http://xxxx.cloudapp.net:9200/ Took: 00:00:00.9595496
- PingFailure: Node: http://10.85.xxx.xx:9200/ Exception: PipelineException Took: 00:00:21.4154660
- SniffOnFail: Took: 00:00:21.1967809
- SniffFailure: Node: http://10.85.xxx.xx:9200/ Exception: PipelineException Took: 00:00:21.1787333
# OriginalException: Elasticsearch.Net.ElasticsearchClientException: One or more errors occurred. ---> System.AggregateException: One or more errors occurred. ---> Elasticsearch.Net.PipelineException: Failed sniffing cluster state. ---> System.AggregateException: One or more errors occurred. ---> Elasticsearch.Net.PipelineException: An error occurred trying to establish a connection with the specified node.
at Elasticsearch.Net.RequestPipeline.Sniff() in D:\dev\git\elasticsearch-net-2.x\src\Elasticsearch.Net\Transport\Pipeline\RequestPipeline.cs:line 326
--- End of inner exception stack trace ---
--- End of inner exception stack trace ---
at Elasticsearch.Net.RequestPipeline.Sniff() in D:\dev\git\elasticsearch-net-2.x\src\Elasticsearch.Net\Transport\Pipeline\RequestPipeline.cs:line 341
at Elasticsearch.Net.RequestPipeline.SniffOnConnectionFailure() in D:\dev\git\elasticsearch-net-2.x\src\Elasticsearch.Net\Transport\Pipeline\RequestPipeline.cs:line 301
at Elasticsearch.Net.Transport`1.Ping(IRequestPipeline pipeline, Node node) in D:\dev\git\elasticsearch-net-2.x\src\Elasticsearch.Net\Transport\Transport.cs:line 179
at Elasticsearch.Net.Transport`1.Request[TReturn](HttpMethod method, String path, PostData`1 data, IRequestParameters requestParameters) in D:\dev\git\elasticsearch-net-2.x\src\Elasticsearch.Net\Transport\Transport.cs:line 68
--- End of inner exception stack trace ---
--- End of inner exception stack trace ---
# Audit exception in step 2 PingFailure:
Elasticsearch.Net.PipelineException: An error occurred trying to establish a connection with the specified node.
at Elasticsearch.Net.RequestPipeline.Ping(Node node) in D:\dev\git\elasticsearch-net-2.x\src\Elasticsearch.Net\Transport\Pipeline\RequestPipeline.cs:line 248
# Audit exception in step 4 SniffFailure:
Elasticsearch.Net.PipelineException: An error occurred trying to establish a connection with the specified node.
at Elasticsearch.Net.RequestPipeline.Sniff() in D:\dev\git\elasticsearch-net-2.x\src\Elasticsearch.Net\Transport\Pipeline\RequestPipeline.cs:line 326
# Request:
<Request stream not captured or already read to completion by serializer. Set DisableDirectStreaming() on ConnectionSettings to force it to be set on the response.>
# Response:
<Response stream not captured or already read to completion by serializer. Set DisableDirectStreaming() on ConnectionSettings to force it to be set on the response.>

Pig filter fails due to unexpected data

I am running Cassandra and have about 20k records in it to play with. I am trying to run a filter in Pig on this data, but I am getting the following message back:
2015-07-23 13:02:23,559 [Thread-4] WARN org.apache.hadoop.mapred.LocalJobRunner - job_local_0001
java.lang.RuntimeException: com.datastax.driver.core.exceptions.InvalidQueryException: Expected 8 or 0 byte long (1)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.initNextRecordReader(PigRecordReader.java:260)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:205)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:532)
at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)
Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: Expected 8 or 0 byte long (1)
at com.datastax.driver.core.exceptions.InvalidQueryException.copy(InvalidQueryException.java:35)
at com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:263)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:179)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:52)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:44)
at org.apache.cassandra.hadoop.cql3.CqlRecordReader$RowIterator.(CqlRecordReader.java:259)
at org.apache.cassandra.hadoop.cql3.CqlRecordReader.initialize(CqlRecordReader.java:151)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.initNextRecordReader(PigRecordReader.java:256)
... 7 more
You would think this is an obvious error, and believe me, there are a ton of results on Google for it. It's clear that some piece of my data isn't conforming to the expected type of a given column. What I don't understand is 1) why this is happening, and 2) how to debug it. If I try to insert invalid data into Cassandra from my Node.js app, it throws this kind of error when my data doesn't match the column's data type - so this shouldn't even be possible, should it? I've read that data validation using UTF8 is wonky and that setting a different kind of validation is the answer, but I don't know how to do that. Here are my steps to reproduce:
grunt> define CqlNativeStorage org.apache.cassandra.hadoop.pig.CqlNativeStorage();
grunt> test = load 'cql://blah/blahblah' USING CqlNativeStorage();
grunt> describe test;
13:09:54.544 [main] DEBUG o.a.c.hadoop.pig.CqlNativeStorage - Found ksDef name: blah
13:09:54.544 [main] DEBUG o.a.c.hadoop.pig.CqlNativeStorage - partition keys: ["ad_id"]
13:09:54.544 [main] DEBUG o.a.c.hadoop.pig.CqlNativeStorage - cluster keys: []
13:09:54.544 [main] DEBUG o.a.c.hadoop.pig.CqlNativeStorage - row key validator: org.apache.cassandra.db.marshal.UTF8Type
13:09:54.544 [main] DEBUG o.a.c.hadoop.pig.CqlNativeStorage - cluster key validator: org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type)
blahblah: {ad_id: chararray,address: chararray,city: chararray,date_created: long,date_listed: long,fireplace: bytearray,furnished: bytearray,garage: bytearray,neighbourhood: chararray,num_bathrooms: int,num_bedrooms: int,pet_friendly: bytearray,postal_code: chararray,price: double,province: chararray,square_feet: int,url: chararray,utilities_included: bytearray}
grunt> query1 = FILTER blahblah BY city == 'New York';
grunt> dump query1;
Then it runs for a while, dumps out tons of logs, and the error appears.
Discovered my problem: the Pig partitioner did not match the Cassandra (CQL3) partitioner, and therefore the data was being parsed incorrectly. Previously the environment variable was PIG_PARTITIONER=org.apache.cassandra.dht.RandomPartitioner. After I changed it to PIG_PARTITIONER=org.apache.cassandra.dht.Murmur3Partitioner it started working.

spray-client throwing "Too many open files" exception when giving more concurrent requests

I have a spray HTTP client running on server X, which makes connections to server Y. Server Y is kind of slow (3+ seconds per request).
This is my HTTP client invocation:
def get() {
  val result = for {
    response <- IO(Http).ask(HttpRequest(GET, Uri(getUri(msg)), headers)).mapTo[HttpResponse]
  } yield response
  result onComplete {
    case Success(res) => sendSuccess(res)
    case Failure(error) => sendError(error)
  }
}
These are the configurations I have in application.conf:
spray.can {
  client {
    request-timeout = 30s
    response-chunk-aggregation-limit = 0
    max-connections = 50
    warn-on-illegal-headers = off
  }
  host-connector {
    max-connections = 128
    idle-timeout = 3s
  }
}
Now I tried to stress server X with a large number of concurrent requests (using ab with n=1000 and c=100).
Up to about 900 requests it went fine. After that the server threw a lot of exceptions and I couldn't hit it any more.
These are the exceptions:
[info] [ERROR] [03/28/2015 17:33:13.276] [squbs-akka.actor.default-dispatcher-6] [akka://squbs/system/IO-TCP/selectors/$a/0] Accept error: could not accept new connection
[info] java.io.IOException: Too many open files
[info] at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
[info] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241)
[info] at akka.io.TcpListener.acceptAllPending(TcpListener.scala:103)
and on hitting the same server again, it threw the exception below:
[info] [ERROR] [03/28/2015 17:53:16.735] [hcp-client-akka.actor.default-dispatcher-6] [akka://hcp-client/system/IO-TCP/selectors] null
[info] akka.actor.ActorInitializationException: exception during creation
[info] at akka.actor.ActorInitializationException$.apply(Actor.scala:164)
[info] at akka.actor.ActorCell.create(ActorCell.scala:596)
[info] Caused by: java.lang.reflect.InvocationTargetException
[info] at sun.reflect.GeneratedConstructorAccessor59.newInstance(Unknown Source)
[info] Caused by: java.io.IOException: Too many open files
[info] at sun.nio.ch.IOUtil.makePipe(Native Method)
I was previously using the Apache HTTP client (which is synchronous), and it was able to handle 10,000+ requests with a concurrency of 100.
I'm not sure what I'm missing. Any help would be appreciated.
The problem is that every time you call the get() method it creates a new actor, which creates at least one connection to the remote server. Furthermore, you never shut that actor down, so each such connection lives until it times out.
You only need a single such actor to manage all your HTTP requests, so to fix it, take IO(Http) out of the get() method and call it only once. Reuse the returned ActorRef for all your requests to that server, and shut it down on application shutdown.
For example:
val system: ActorSystem = ...
val io = IO(Http)(system)
io ! Http.Bind( ...
def get(): Unit = {
  ...
  io.ask ...
  // or
  io.tell ...
}

Resources