Is it possible to avoid .await with Elastic4s - async-await

I'm using Elastic4s (Scala Client for ElasticSearch).
I can retrieve multiGet results with await:
val client = HttpClient(ElasticsearchClientUri(esHosts, esPort))
val resp = client.execute {
  multiget(
    get(C1) from "mlphi_crm_0/profiles" fetchSourceInclude("crm_events"),
    get(C2) from "mlphi_crm_0/profiles" fetchSourceInclude("crm_events"),
    get(C3) from "mlphi_crm_0/profiles" fetchSourceInclude("crm_events")
  )
}.await
val result = resp.items
But I've read that in practice it's better to avoid .await. How can we do that? Thanks!

You shouldn't use .await because it blocks the thread while it waits for the future to complete.
Instead, handle the future as you would with any other API that returns futures, whether that's reactive-mongo, akka's ask pattern, or anything else.

I realise this is old, but in case anyone else comes across it, the simplest way to handle this would be:
val client = HttpClient(ElasticsearchClientUri(esHosts, esPort))
val respFuture = client.execute {
  multiget(
    get(C1) from "mlphi_crm_0/profiles" fetchSourceInclude("crm_events"),
    get(C2) from "mlphi_crm_0/profiles" fetchSourceInclude("crm_events"),
    get(C3) from "mlphi_crm_0/profiles" fetchSourceInclude("crm_events")
  )
}
respFuture.map(resp => ... /* do stuff with resp.items */)
The key thing here is that your processing actually takes place on another thread, which Scala schedules for you when, and only when, the data is ready. The caller keeps running immediately after respFuture.map(). Whatever the function you pass to map returns is wrapped in a new Future; if you don't need that result, use onComplete or andThen instead, as they make error handling a little easier.
See https://docs.scala-lang.org/overviews/core/futures.html for more details on working with Futures, and https://alvinalexander.com/scala/concurrency-with-scala-futures-tutorials-examples for some good examples.
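If it helps to see the blocking-versus-callback distinction outside Scala, here is a minimal sketch of the same idea using Python's concurrent.futures (purely an analogy, not Elastic4s code; fetch is a made-up stand-in for the remote call):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch():
    # Hypothetical stand-in for a slow remote call.
    return {"items": [1, 2, 3]}

with ThreadPoolExecutor() as pool:
    # Blocking, like .await: result() parks the caller until the value arrives.
    fut = pool.submit(fetch)
    items = fut.result()["items"]

    # Non-blocking, like map/onComplete: register a callback and keep going.
    fut2 = pool.submit(fetch)
    fut2.add_done_callback(lambda f: print("got", len(f.result()["items"]), "items"))
```

The shape is the same in both languages: either you park the calling thread, or you hand the runtime a function to invoke once the result exists.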

Related

Running the GET call only once in Gatling

I have a scenario where I want to run POST calls for a given amount of time and then make a GET call, only once, after the POST calls have finished. Any suggestions on how to do this in Gatling?
You could do something like this:
def basicScn: ScenarioBuilder = scenario("BasicScenario")
  .exec(postReq)
  .pause(5)
  .exec(getReq)

val postReq: HttpRequestBuilder = http("PostRequest")
  .post("/hello/world")
  .body(StringBody("${body}")).asJson

val getReq: HttpRequestBuilder = http("GetRequest")
  .get("/stack/over/flow")

Why does using asyncio.ensure_future for long jobs instead of await run so much quicker?

I am downloading JSON from an API using the asyncio module. The crux of my question concerns the following event loop, implemented as:
loop = asyncio.get_event_loop()
main_task = asyncio.ensure_future( klass.download_all() )
loop.run_until_complete( main_task )
and download_all() implemented as this instance method of a class, which already has downloader objects created and available to it, and thus calls each downloader's respective download method:
async def download_all(self):
    """ Builds the coroutines, uses asyncio.wait, then sifts for those still pending, loops """
    ret = []
    async with aiohttp.ClientSession() as session:
        pending = []
        for downloader in self._downloaders:
            pending.append(asyncio.ensure_future(downloader.download(session)))
        while pending:
            dne, pnding = await asyncio.wait(pending)
            ret.extend([d.result() for d in dne])
            # Get all the tasks, cannot use "pnding"
            tasks = asyncio.Task.all_tasks()
            pending = [tks for tks in tasks if not tks.done()]
            # Exclude the one that we know hasn't ended yet (UGLY)
            pending = [t for t in pending if not t._coro.__name__ == self.download_all.__name__]
    return ret
Why is it that, in the downloaders' download methods, everything runs far faster when I use asyncio.ensure_future instead of the await syntax? That is, it seems more "asynchronous", as far as I can tell from the logs.
This works because of the way I have set up detecting all the tasks that are still pending, not letting the download_all method complete, and repeatedly calling asyncio.wait.
I thought the await keyword allowed the event loop mechanism to do its thing and share resources efficiently. How come doing it this way is faster? Is there something wrong with it? For example:
async def download(self, session):
    async with session.request(self.method, self.url, params=self.params) as response:
        response_json = await response.json()
        # Not using await here, as I am "supposed" to
        asyncio.ensure_future(self.write(response_json, self.path))
        return response_json

async def write(self, res_json, path):
    # using aiofiles to write, but it doesn't (seem to?) support direct json
    # so converting to raw text first
    txt_contents = json.dumps(res_json, **self.json_dumps_kwargs)
    async with aiofiles.open(path, 'w') as f:
        await f.write(txt_contents)
With full code implemented and a real API, I was able to download 44 resources in 34 seconds, but when using await it took more than three minutes (I actually gave up as it was taking so long).
When you await inside each iteration of the loop, each download must finish before the next one starts.
When you use ensure_future instead, it doesn't wait: it creates tasks so all the files download concurrently, and then all of them are awaited together in the second loop.
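The difference can be seen in a tiny self-contained sketch (asyncio.sleep stands in for the real network and file work; job and the timings are made up for illustration):

```python
import asyncio
import time

async def job(i):
    await asyncio.sleep(0.1)  # stands in for one download or write
    return i

async def sequential(n):
    # awaiting inside the loop: each job finishes before the next starts
    return [await job(i) for i in range(n)]

async def concurrent(n):
    # schedule all jobs as tasks first, then await them together
    tasks = [asyncio.ensure_future(job(i)) for i in range(n)]
    return await asyncio.gather(*tasks)

t0 = time.perf_counter()
asyncio.run(sequential(5))
seq_time = time.perf_counter() - t0

t0 = time.perf_counter()
asyncio.run(concurrent(5))
conc_time = time.perf_counter() - t0

print(f"sequential: {seq_time:.2f}s, concurrent: {conc_time:.2f}s")
```

The sequential version takes roughly five sleeps' worth of time; the task-based version takes roughly one, which matches the speedup described in the question.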

Using promises with Transcrypt

I'm having great fun with Transcrypt, a fantastic Python 3 to JavaScript compiler available as a Python module. Most of my code is synchronous, but I've had no problem doing things with setTimeout and XHR requests. Now I've started using PouchDB for local persistence and am trying to find a clean way to handle promises. At the moment, I am doing this to write to a PouchDB instance:
def db_put():
    def put_success(doc):
        print("Put a record in the db. Id: ", doc.id, "rev: ", doc.rev)

    def put_failure(error):
        print('Failed to put a record in the db. Error: ', error)

    strHello = {'_id': "1", 'title': 'hello db'}
    db.put(strHello) \
        .then(put_success) \
        .catch(put_failure)

db = PouchDB('test_db')
document.getElementById("db_put").addEventListener("click", db_put)
This works fine, but I am curious to know a few things about promises being transcrypted from Python to JavaScript (this may save me from madness):
Are there preferable, more 'pythonic' ways to handle this?
Can one make use of ES7's async/await through Transcrypt? Since Transcrypt allows JavaScript functions to be accessed directly from the Python code, I thought there might be some trick here that I'm not getting.
Thanks!
About the promises
The way you dealt with promises looks pythonic enough to me.
In case you get tired of the line continuations where 'fluent' notation (call chaining) is involved, there's an alternative to using \. This alternative is used e.g. in the d3js_demo that comes with Transcrypt, in the following fragment:
self.svg = d3.select('body'
    ).append('svg'
    ).attr('width', self.width
    ).attr('height', self.height
    ).on('mousemove', self.mousemove
    ).on('mousedown', self.mousedown)
Since many .then's can be chained as well, one could write:
db.put(strHello
    ).then(put_success
    ).then(put_success_2
    ).then(put_success_3
    # ... etc.
    ).catch(put_failure)
After some getting used to, this will immediately make clear that call chaining is involved. But it is only a matter of formatting.
About async/await
They aren't supported yet, but the plan is to add them soon after JavaScript officially has them (JS7, I hope). For now you can use __pragma__ ('js', '{}', '''<any javascript code>''') as a workaround.
Async/await has been supported for some time now. You can use it to deal with Promises. For example:
Enable jQuery usage:
__pragma__ ('alias', 'S', '$')
Define a function which returns a Promise, in this case an Ajax call:
def read(url: str) -> 'Promise':
    deferred = S.Deferred()
    S.ajax({'type': "POST", 'url': url, 'data': {},
            'success': lambda d: deferred.resolve(d),
            'error': lambda e: deferred.reject(e)})
    return deferred.promise()
Use the asynchronous code as if it were synchronous:
async def readALot():
    try:
        result1 = await read("url_1")
        result2 = await read("url_2")
    except Exception:
        console.warn("Reading a lot failed")
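The Deferred-wrapping trick above has a direct parallel in desktop CPython: wrap a callback-style API in an asyncio.Future so it can be awaited. A minimal sketch (read_cb is a made-up synchronous stand-in for something like S.ajax, not Transcrypt code):

```python
import asyncio

def read_cb(url, on_success, on_error):
    # Hypothetical callback-style API, standing in for S.ajax.
    on_success(f"data from {url}")

async def read(url):
    # Wrap the callback API in a Future, just as Deferred wraps the Ajax call.
    fut = asyncio.get_running_loop().create_future()
    read_cb(url, fut.set_result, fut.set_exception)
    return await fut

async def read_a_lot():
    # Use the asynchronous code as if it were synchronous.
    result1 = await read("url_1")
    result2 = await read("url_2")
    return result1, result2

print(asyncio.run(read_a_lot()))
```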
Happy usage of Python in the browser!

I need help getting my bot to respond with a different response each time

I'm making a joke bot for Tumblr that responds to question-askers with some variation of "Greetings [asker name here], piss. Yours truly, Pissy Boy." I'm trying to include 4 possible responses instead of just one catch-all response, which is where I've hit a snag. Here's the entirety of my current code for the bot (the bot is written in Ruby on Ubuntu):
require 'tumblr_client'

USERNAME = "thepissbot"

def piss
  # Authenticate via OAuth
  client = Tumblr::Client.new({
    :consumer_key => ENV['CONSUMER_KEY'],
    :consumer_secret => ENV['CONSUMER_SECRET'],
    :oauth_token => ENV['OAUTH_TOKEN'],
    :oauth_token_secret => ENV['OAUTH_TOKEN_SECRET']
  })
  # Make the request
  asks = client.submissions(USERNAME, limit: 5)['posts']
  asks.each do |ask|
    if ask ['type'] !='answer'
      piss
      return
      response1 = "Greetings #{ask['asking_name']}, piss. Yours truly, Pissy Boy."
      response2 = "Dear, #{ask['asking_name']}, piss. Love, Pissy Boy."
      response3 = "Salutations #{ask['asking_name']}, piss. Sincerely, Pissy Boy."
      response4 = "What's up, #{ask['asking_name']}? Piss. Your friend, Pissy Boy."
      array=(response1 response2 response3 response4)
      tags = "piss mail"
      client.edit(USERNAME,
        id: ask['id'],
        answer: array,
        state: 'published',
        tags: tags
      )
    end
  end
end
This supposedly "works", according to the terminal. However, when I check the inbox for the bot, the test ask I had sent it remains unanswered, and that's certainly not expected behavior. I think it has something to do with the way I'm dealing with the array.
This has been a problem for 2 days now... I feel like it should be super-simple but I'm just missing something. Any help would be appreciated.
OK, now that your code is formatted correctly, the error is obvious:
if ask ['type'] !='answer'
  piss
  return
  # ... (some other code)
This early return will cause the entire piss method to finish. Unless there's some purpose to this return call, remove it.
This should also make the benefits of formatting code obvious: you can more easily see which if, each, or def block a given line of code runs in.
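To actually get a different response each time, the four strings need to go into an array literal (the array=(response1 response2 response3 response4) line in the question is not valid Ruby), and one of them can be picked with Array#sample. A minimal sketch, with a made-up asker name standing in for the Tumblr data:

```ruby
responses = [
  "Greetings %s, piss. Yours truly, Pissy Boy.",
  "Dear, %s, piss. Love, Pissy Boy.",
  "Salutations %s, piss. Sincerely, Pissy Boy.",
  "What's up, %s? Piss. Your friend, Pissy Boy."
]

# sample returns one random element per call
answer = format(responses.sample, "some_asker")
puts answer
```

In the bot itself, answer (a single string, not the whole array) is what should be passed to client.edit.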

routing files with zeromq (jeromq)

I'm trying to implement a "file dispatcher" on ZMQ (actually JeroMQ; I'd rather avoid JNI).
What I need is to load balance incoming files to processors:
each file is handled only by one processor
files are potentially large so I need to manage the file transfer
Ideally I would like something like https://github.com/zeromq/filemq but
with a push/pull behaviour rather than publish/subscribe
being able to handle the received file rather than writing it to disk
My idea is to use a mix of taskvent/tasksink and asyncsrv samples.
Client side:
one PULL socket to be notified of a file to be processed
one DEALER socket to handle the (async) file transfer chunk by chunk
Server side:
one PUSH socket to dispatch incoming file (names)
one ROUTER socket to handle file requests
a few DEALER workers managing the file transfers for clients and connected to the router via an inproc proxy
My first question is: does this seem like the right way to go? Anything simpler maybe?
My second question is: my current implementation gets stuck on sending out the actual file data.
clients are notified by the server, and issue a request.
the server worker gets the request, and writes the response back to the inproc queue but the response never seems to go out of the server (can't see it in wireshark) and the client is stuck on the poller.poll awaiting the response.
It's not a matter of sockets being full and dropping data, I'm starting with very small files sent in one go.
Any insight?
Thanks!
==================
Following raffian's advice, I simplified my code, removing the extra push/pull socket (it does make sense now that you say it).
I'm left with the "non working" socket!
Here's my current code. It has many flaws that are out of scope for now (client ID, next chunk, etc.).
For now, I'm just trying to get both sides talking to each other, roughly in this sequence:
Server
object FileDispatcher extends App
{
  val context = ZMQ.context(1)

  // server is the frontend that pushes filenames to clients and receives requests
  val server = context.socket(ZMQ.ROUTER)
  server.bind("tcp://*:5565")

  // backend handles clients requests
  val backend = context.socket(ZMQ.DEALER)
  backend.bind("inproc://backend")

  // files to dispatch given in arguments
  args.toList.foreach { filepath =>
    println(s"publish $filepath")
    server.send("newfile".getBytes(), ZMQ.SNDMORE)
    server.send(filepath.getBytes(), 0)
  }

  // multithreaded server: router hands out requests to DEALER workers via an inproc queue
  val NB_WORKERS = 1
  val workers = List.fill(NB_WORKERS)(new Thread(new ServerWorker(context)))
  workers foreach (_.start)
  ZMQ.proxy(server, backend, null)
}

class ServerWorker(ctx: ZMQ.Context) extends Runnable
{
  override def run()
  {
    val worker = ctx.socket(ZMQ.DEALER)
    worker.connect("inproc://backend")
    while (true)
    {
      val zmsg = ZMsg.recvMsg(worker)
      zmsg.pop // drop inner queue envelope (?)
      val cmd = zmsg.pop // cmd is used to continue/stop
      cmd.toString match {
        case "get" =>
          val file = zmsg.pop.toString
          println(s"clientReq: cmd: $cmd , file: $file")
          // 1- brute force: ignore cmd and send full file in one go!
          worker.send("eof".getBytes, ZMQ.SNDMORE) // header indicates this is the last chunk
          val bytes = io.Source.fromFile(file).mkString("").getBytes // dirty read, for testing only!
          worker.send(bytes, 0)
          println(s"${bytes.size} bytes sent for $file: " + new String(bytes))
        case x => println("cmd " + x + " not implemented!")
      }
    }
  }
}
Client
object FileHandler extends App
{
  val context = ZMQ.context(1)

  // client is notified of new files, then fetches them from the server
  val client = context.socket(ZMQ.DEALER)
  client.connect("tcp://*:5565")

  val poller = new ZMQ.Poller(1) // "poll" responses
  poller.register(client, ZMQ.Poller.POLLIN)

  while (true)
  {
    poller.poll
    val zmsg = ZMsg.recvMsg(client)
    val cmd = zmsg.pop
    val data = zmsg.pop
    // header is the command/action
    cmd.toString match {
      case "newfile" => startDownload(data.toString)               // message content is the filename to fetch
      case "chunk"   => gotChunk(data.toString, zmsg.pop.getData)  // filename, chunk
      case "eof"     => endDownload(data.toString, zmsg.pop.getData) // filename, last chunk
    }
  }

  def startDownload(filename: String)
  {
    println("got notification: start download for " + filename)
    client.send("get".getBytes, ZMQ.SNDMORE) // command header
    client.send(filename.getBytes, 0)
  }

  def gotChunk(filename: String, bytes: Array[Byte])
  {
    println("got chunk for " + filename + ": " + new String(bytes)) // callback the user here
    client.send("next".getBytes, ZMQ.SNDMORE)
    client.send(filename.getBytes, 0)
  }

  def endDownload(filename: String, bytes: Array[Byte])
  {
    println("got eof for " + filename + ": " + new String(bytes)) // callback the user here
  }
}
On the client, you don't need PULL with DEALER.
DEALER is PUSH and PULL combined, so use DEALER only; your code will be simpler.
The same goes for the server: unless you're doing something special, you don't need PUSH with ROUTER, since ROUTER is bidirectional.
"the server worker gets the request, and writes the response back to the inproc queue but the response never seems to go out of the server (can't see it in wireshark) and the client is stuck on the poller.poll awaiting the response."
Code Problems
In the server, you're dispatching files with args.toList.foreach before starting the proxy; this is probably why nothing is leaving the server. Start the proxy first, then use it. Also, once you call ZMQ.proxy(...), the call blocks indefinitely, so you'll need a separate thread to send the filepaths.
The client may have an issue with the poller. The typical pattern for polling is:
ZMQ.Poller items = new ZMQ.Poller(1);
items.register(receiver, ZMQ.Poller.POLLIN);
while (true) {
    items.poll(TIMEOUT);
    if (items.pollin(0)) {
        message = receiver.recv(0);
        // ...
    }
}
In the above code: (1) poll until the timeout, (2) then check for messages, and if any are available, (3) fetch them with receiver.recv(0). In your code, you poll and then drop straight into recv() without checking. You need to check whether the poller has messages for the polled socket before calling recv(); otherwise the receiver will hang when there are no messages.
