Running the GET call only once in Gatling

I have a scenario where I want to run a POST call for a given amount of time and then make a GET call only once, after the POST calls have finished. Any suggestions on how to do this in Gatling?

You could do something like this, wrapping the POST in a during loop so it repeats for the given amount of time (60 seconds here, as an example) before the single GET:

import scala.concurrent.duration._
import io.gatling.core.Predef._
import io.gatling.core.structure.ScenarioBuilder
import io.gatling.http.Predef._
import io.gatling.http.request.builder.HttpRequestBuilder

def basicScn: ScenarioBuilder = scenario("BasicScenario")
  .during(60.seconds) { // repeat the POST for the given duration
    exec(postReq)
  }
  .pause(5)
  .exec(getReq) // the GET then runs once per user

val postReq: HttpRequestBuilder = http("PostRequest")
  .post("/hello/world")
  .body(StringBody("${body}")).asJson

val getReq: HttpRequestBuilder = http("GetRequest")
  .get("/stack/over/flow")

Related

How to send jmeter test results to datadog?

I wanted to ask if anyone has ever saved JMeter test results (sampler names, duration, pass/fail) to Datadog? Kind of like the backend listener for InfluxDB/Graphite... but for Datadog. jmeter-plugins has no such plugin. Datadog seems to offer something called a "JMX integration", but I'm not sure whether that is what I need.
I figured out how to do this using the Datadog API (https://docs.datadoghq.com/api/?lang=python#post-timeseries-points). The following Python script takes in the JTL file (the JMeter results) and posts the transaction name, response time, and status (pass/fail) to Datadog.
#!/usr/bin/env python3
import sys

import pandas as pd
from datadog import initialize, api

options = {
    'api_key': '<API_KEY>',
    'app_key': '<APPLICATION_KEY>'
}

metrics = []

def get_current_metric(timestamp, label, elapsed, success):
    # Build one time-series point, tagged with the test case name and pass/fail status.
    metric = {}
    metric.update({'metric': 'jmeter'})
    metric.update({'points': [(timestamp, elapsed)]})
    curtags = {}
    curtags.update({'testcase': label})
    curtags.update({'success': success})
    metric.update({'tags': curtags})
    return metric

initialize(**options)

jtl_file = sys.argv[1]
df = pd.read_csv(jtl_file)
for index, row in df.iterrows():
    timestamp = row['timeStamp'] / 1000  # JMeter timestamps are in ms; Datadog expects seconds
    label = row['label']
    elapsed = row['elapsed']
    success = str(row['success'])
    metric = get_current_metric(timestamp, label, elapsed, success)
    metrics.append(metric)

api.Metric.send(metrics)
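For reference, the script assumes the JTL was saved in JMeter's CSV format. A minimal, self-contained sketch of the columns it reads (the row values below are made up for illustration):

import io
import pandas as pd

# Column names match JMeter's default CSV JTL output; the values are fabricated.
sample_jtl = io.StringIO(
    "timeStamp,elapsed,label,success\n"
    "1580000000000,123,Login,true\n"
    "1580000001000,456,Search,false\n"
)
df = pd.read_csv(sample_jtl)
print(df[["label", "elapsed", "success"]])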

Is it possible to avoid .await with Elastic4s

I'm using Elastic4s (the Scala client for Elasticsearch). I can retrieve multiGet results with await:
val client = HttpClient(ElasticsearchClientUri(esHosts, esPort))
val resp = client.execute {
  multiget(
    get(C1) from "mlphi_crm_0/profiles" fetchSourceInclude("crm_events"),
    get(C2) from "mlphi_crm_0/profiles" fetchSourceInclude("crm_events"),
    get(C3) from "mlphi_crm_0/profiles" fetchSourceInclude("crm_events")
  )
}.await
val result = resp.items
But I've read that in practice it's better to avoid .await. How can we do that? Thanks.
You shouldn't use .await because you're blocking the thread waiting for the future to return.
Instead you should handle the future like you would any other API that returns futures - whether that be reactive-mongo, akka.ask or whatever.
I realise this is old, but in case anyone else comes across it, the simplest way to handle this would be:
val client = HttpClient(ElasticsearchClientUri(esHosts, esPort))
val respFuture = client.execute {
  multiget(
    get(C1) from "mlphi_crm_0/profiles" fetchSourceInclude("crm_events"),
    get(C2) from "mlphi_crm_0/profiles" fetchSourceInclude("crm_events"),
    get(C3) from "mlphi_crm_0/profiles" fetchSourceInclude("crm_events")
  )
}
respFuture.map(resp => ...[do stuff with resp.items])
The key thing here is that your processing actually takes place in a subthread, which Scala takes care of calling for you when, and only when, the data is ready. The caller keeps running immediately after respFuture.map(). Whatever the function you pass to map() returns is passed back as a new Future; if you don't need it, use onComplete or andThen instead, as they make error handling a little easier.
See https://docs.scala-lang.org/overviews/core/futures.html for more details on handling Futures, and https://alvinalexander.com/scala/concurrency-with-scala-futures-tutorials-examples for some good examples.

Using locust.io to check web API performance

I am new to web API calls. I have a bunch of web API calls happening, and I would like to check their performance using locust.io: swarm the calls with users and record the time taken by each web API call.
Below is the locustfile I have written.
from locust import HttpLocust, TaskSet, task

class MyTaskSet(TaskSet):
    @task
    def my_task(self):
        print("executing my_task")
        self.client.get('https://piprdweb.cds.com/piwebapi/streams/A0Ej3OLt_2RH0mvwiXA5DULmw-UVswK3W5hGvHhJaxRW7Owqi14iFv8x0KZLamRlaPPawUElQUkRBRlxQQ0dfUlNCT1BcUklHLjM0MlxCT1BTRU5UUllcU0lHTkFMU1xTVEFDSyBPTkVcWUVMTE9XIFBPRHxDQU4gMSBIVU1JRElUWSAtIFlFTExPVyBQT0Q/value?time=*-5s')

class MyLocust(HttpLocust):
    task_set = MyTaskSet
    min_wait = 5000
    max_wait = 15000
Next, when I run locust --host https://piprdweb.cds.com/piwebapi/ I get the error below:
AttributeError
[2017-02-06 15:14:26,378] srvgdyplmvrc01.nov.com/ERROR/stderr: :
[2017-02-06 15:14:26,378] srvgdyplmvrc01.nov.com/ERROR/stderr: 'module' object has no attribute 'NSIG'
Please correct me if I am making a mistake.

Why does using asyncio.ensure_future for long jobs instead of await run so much quicker?

I am downloading JSONs from an API using the asyncio module. The crux of my question is this: with the following event loop, implemented like so:
loop = asyncio.get_event_loop()
main_task = asyncio.ensure_future( klass.download_all() )
loop.run_until_complete( main_task )
and download_all() implemented as the following instance method of a class, which already has downloader objects created and available to it and thus calls each one's download method:
async def download_all(self):
    """ Builds the coroutines, uses asyncio.wait, then sifts for those still pending, loops """
    ret = []
    async with aiohttp.ClientSession() as session:
        pending = []
        for downloader in self._downloaders:
            pending.append(asyncio.ensure_future(downloader.download(session)))
        while pending:
            dne, pnding = await asyncio.wait(pending)
            ret.extend([d.result() for d in dne])
            # Get all the tasks, cannot use "pnding"
            tasks = asyncio.Task.all_tasks()
            pending = [tks for tks in tasks if not tks.done()]
            # Exclude the one that we know hasn't ended yet (UGLY)
            pending = [t for t in pending if not t._coro.__name__ == self.download_all.__name__]
    return ret
Why is it that, in the downloaders' download methods, when I use asyncio.ensure_future instead of the await syntax, it runs much faster, that is, seemingly more "asynchronously", as I can see from the logs?
This works because of the way I have set up detecting all the tasks that are still pending, not letting the download_all method complete, and repeatedly calling asyncio.wait.
I thought that the await keyword allowed the event loop mechanism to do its thing and share resources efficiently. How come doing it this way is faster? Is there something wrong with it? For example:
async def download(self, session):
    async with session.request(self.method, self.url, params=self.params) as response:
        response_json = await response.json()
        # Not using await here, as I am "supposed" to
        asyncio.ensure_future(self.write(response_json, self.path))
        return response_json

async def write(self, res_json, path):
    # using aiofiles to write, but it doesn't (seem to?) support direct json,
    # so converting to raw text first
    txt_contents = json.dumps(res_json, **self.json_dumps_kwargs)
    async with aiofiles.open(path, 'w') as f:
        await f.write(txt_contents)
With the full code implemented against a real API, I was able to download 44 resources in 34 seconds, but when using await it took more than three minutes (I actually gave up, as it was taking so long).
When you await in each iteration of the for loop, you wait for each download to finish before starting the next one. When you use ensure_future instead, you don't wait: it creates tasks for all of the downloads and then awaits them all in the second loop, so they run concurrently.
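To make the difference concrete, here is a minimal, self-contained sketch; fetch is a made-up stand-in for a download, and the timings are approximate:

import asyncio

async def fetch(i):
    await asyncio.sleep(1)  # stand-in for network I/O
    return i

async def sequential(n):
    # Awaiting inside the loop: each fetch finishes before the next starts,
    # so this takes roughly n seconds in total.
    return [await fetch(i) for i in range(n)]

async def concurrent(n):
    # ensure_future schedules every fetch immediately; gathering the tasks
    # lets them run together, so this takes roughly 1 second in total.
    tasks = [asyncio.ensure_future(fetch(i)) for i in range(n)]
    return await asyncio.gather(*tasks)

loop = asyncio.get_event_loop()
print(loop.run_until_complete(sequential(3)))  # ~3 s
print(loop.run_until_complete(concurrent(3)))  # ~1 s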

How to optimize Tornado?

The following code fetches a parameter from the request and responds with data from a Couchbase DB according to the value of the parameter.
import json

import tornado.ioloop
import tornado.web

couchbase = Couchbase("ubuntumartini03:8091", "thebucket", "")
bucket = couchbase["thebucket"]

class MH(tornado.web.RequestHandler):
    def get(self):
        key = self.get_argument("pub_id", strip=True)
        result = json.loads(bucket.get(key)[2])
        self.write(result['metaTag'])

if __name__ == "__main__":
    app = tornado.web.Application(handlers=[(r"/", MH)])
    app.listen(8888, "")
    tornado.ioloop.IOLoop.instance().start()
Problem: for the given hardware, we can make 10k calls/sec to Couchbase from the Tornado machine. But when we make calls from a client to the Tornado machine, we are only able to make 350 calls/sec.
Surely the bottleneck here is Tornado. How do we optimize it to be able to make at least 7k calls/sec?
Edit your code like this:

from tornado.ioloop import IOLoop

couchbase = Couchbase("ubuntumartini03:8091", "thebucket", "")
bucket = couchbase["thebucket"]

class MH(tornado.web.RequestHandler):
    async def get(self):
        key = self.get_argument("pub_id", strip=True)
        # Run the blocking Couchbase call in a thread pool so the IOLoop stays free.
        result = await IOLoop.current().run_in_executor(None, bucket.get, key)
        self.write(json.loads(result[2])['metaTag'])

if __name__ == "__main__":
    app = tornado.web.Application(handlers=[(r"/", MH)])
    app.listen(8888, "")
    tornado.ioloop.IOLoop.instance().start()
What client do you use: a synchronous or an asynchronous one? If it is a synchronous client, it cannot make full use of the Tornado IOLoop reactor.
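On that last point, here is a minimal sketch of driving the server with an asynchronous client, assuming Tornado 5+; the URL and request count are hypothetical:

import asyncio
from tornado.httpclient import AsyncHTTPClient

# Allow more simultaneous requests than the default of 10.
AsyncHTTPClient.configure(None, max_clients=100)

async def benchmark(n):
    client = AsyncHTTPClient()
    url = "http://localhost:8888/?pub_id=test"  # hypothetical endpoint
    # Fire all n requests concurrently and wait for them together.
    responses = await asyncio.gather(*(client.fetch(url) for _ in range(n)))
    return [r.code for r in responses]

if __name__ == "__main__":
    print(asyncio.get_event_loop().run_until_complete(benchmark(100)))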
