This line of code in Tornado sometimes becomes very slow, what is the reason and what should I do? - python-asyncio

I'm using Tornado in Python for my HTTP service (Python 3.8, Tornado 6.3). I found that the latency sometimes becomes noticeably long.
I used time.time() to profile the program and eventually found this line https://github.com/tornadoweb/tornado/blob/341e8900524846a2c5e7a9ce5be4448e13d2d3d7/tornado/http1connection.py#L286 to be the reason.
await self._finish_future
It sometimes consumes more than 1 second, especially when there are many concurrent queries. I don't quite know what it is doing. What should I do to improve performance?
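Since Tornado 6 runs on asyncio, one thing worth checking is whether a slow await like this is just a symptom of a busy event loop rather than of that line itself. This is a minimal diagnostic sketch of my own, not from the original post; the 0.1 second threshold is an arbitrary assumption:
import asyncio

loop = asyncio.get_event_loop()
loop.set_debug(True)               # log callbacks/tasks that block the event loop
loop.slow_callback_duration = 0.1  # seconds; anything slower gets a warning logged
If warnings point at your own handlers, the await is slow because the loop is starved, not because of http1connection.py.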

Related

Freezing while downloading large datasets through Shodan?

I'm using Shodan's API through the Anaconda Terminal on Windows 10 to get data against the query below, but after a few seconds of running, the ETA timer freezes, and my network activity drops to zero. Hitting Control+C restarts it when this happens and gets it moving for a few seconds again, but it stops soon after.
shodan download --limit 3100000 data state:"wa"
Also, when it is running, the download speeds seem pretty slow, and I wanted to ask if there is any way I can speed it up. My university's internet is capable of upwards of 300 Mbps, but the download seems to cap at 5 Mbps.
I don't know how to solve either of these issues; my device has enough space and my internet isn't disconnecting. We've tried running the Anaconda Terminal as an Administrator, but that hasn't helped either.
I am not familiar with the specific website, but in general, limited speed or stalled downloads like this are not caused by things 'on your side' like the university connection, or even your download script.
Odds are that the website wants to protect itself and you need to use the API differently (for example with a different account), or that you are hitting usage limits tied to your account.
The best course of action may be to contact the website and ask them how to do this.
I heard back from Shodan support; cross-posting some of their reply here:
The API is not designed for large, bulk export of data. As a result, you're encountering a few problems/limits:
There is a hard limit of 1 million results per search query. This means that it isn't possible to download all results for the search query "state:wa".
The search API performs best on the first few pages and responds progressively slower the deeper into the results you get. This means that the first few pages return instantly, whereas the 100th page will potentially take 10+ seconds.
You can only send 1 request per second, so you can't multiplex/parallelize the search requests.
A lot of high-level analysis can be performed using search facets.
There's documentation on facets, which return summary information from their API, in the shodan.pdf booklet floating around their site.
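As an illustration of the facets suggestion (my own sketch, not part of Shodan's reply, assuming the official shodan Python library and a valid API key), a query like the one above can be summarised without paging through millions of individual results:
import shodan

api = shodan.Shodan("YOUR_API_KEY")  # placeholder key
# count() returns aggregate buckets (here: top organisations and ports)
# instead of individual banners, so it avoids the deep-paging slowdown.
summary = api.count('state:"WA"', facets=[("org", 10), ("port", 10)])
print("Total results:", summary["total"])
for facet_name, buckets in summary["facets"].items():
    print(facet_name)
    for bucket in buckets:
        print("  {}: {}".format(bucket["value"], bucket["count"]))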

GetHBase processor from 1 table Apache NiFi

GetHBase >> ExecuteScript
Hello, I have a problem with the back pressure object threshold when processing data from HBase into a script executed with Jython. If just one task is executed, my queue is always full, because the first processor is faster than the second. I increased the concurrent tasks of the second processor from 1 to 3 or 4, but that produces a new error message (screenshot not included here).
Does anyone here have a solution?
This might actually increase your work a bit but I would highly recommend writing Groovy for your custom implementation as opposed to Python/Jython/JRuby.
A couple of reasons for that!
Groovy was built "for the JVM" and leverages/integrates with Java more cleanly
Jython is an implementation of Python for the JVM. There is a lot of back and forth that happens between Python and the JVM, which can substantially increase the overhead.
If you still prefer to go with Jython, there are still a couple of things that you can do!
Use InvokeScriptedProcessor (ISP) instead of ExecuteScript. ISP is faster because it only loads the script once, then invokes methods on it, rather than ExecuteScript which evaluates the script each time.
Use ExecuteStreamCommand with command-line Python instead. You won't have the flexibility of accessing attributes, processor state, etc. but if you're just transforming content you should find ExecuteStreamCommand with Python faster.
No matter which language you choose, you can often improve performance by using session.get(int) instead of session.get(). That way, if there are a lot of flow files in the queue, you can call session.get(1000) or something similar and process up to 1000 flow files per execution. If your script has a lot of overhead, you may find that handling multiple flow files per execution significantly improves performance.
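For example, a rough Jython sketch for ExecuteScript along those lines, assuming the standard session and REL_SUCCESS bindings the processor provides (the per-flow-file transformation is left as a placeholder):
flowfiles = session.get(1000)  # grab up to 1000 flow files per trigger
for flowfile in flowfiles:
    # ... do the per-flow-file transformation here ...
    session.transfer(flowfile, REL_SUCCESS)
The script overhead is then paid once per batch instead of once per flow file.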

Extremely high CPU usage in Django

I am helping to develop a fairly complex web application with Django. However some pages are taking 10+ seconds to render. I can't seem to get to the bottom of why this is so slow. The Django Debug Toolbar shows that the bottleneck is clearly CPU, rather than the database or anything else, but it doesn't go into any further detail (as far as I can see). I have tried running a profiler on the system, but can't make head or tail of what is going on from this either. It only shows deep internals as taking up the vast majority of the time, especially <method 'poll' of 'select.poll' objects>, and other such functions as builtins.hasattr, getmodule, ismodule, etc.
I've also tried stepping through the code manually, or pausing execution at random points to try to catch what's going on. I can see that it takes quite a long time to get through a ton of import statements, but then it spends a huge amount of time inside render() functions, taking especially long to load a large number of model fields (on the order of 100 fields).
None of what is happening looks wrong to me, except for the insane amount of time everything takes to run. It's as though Python is just taking forever to load and parse each .py file before doing any processing. However, this is all running on a brand new Digital Ocean server with an SSD, with Gunicorn, PostgreSQL and Nginx. Does anyone have any hints on how I can get to the bottom of this?
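When the generic profiler output is dominated by interpreter internals, one option is to profile just the slow call and sort by cumulative time so your own modules show up near the top. This is a hypothetical sketch, not from the original post; render_slow_page is only a stand-in for whatever view or render call is slow:
import cProfile
import pstats

def render_slow_page():
    # Stand-in for the slow Django view / template render being investigated.
    return sum(i * i for i in range(10**6))

profiler = cProfile.Profile()
profiler.enable()
render_slow_page()
profiler.disable()
pstats.Stats(profiler).sort_stats("cumulative").print_stats(30)  # top 30 entries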

Performance problem with backgroundworkers

I have 15 BackgroundWorkers that are running all the time; each one of them works for about half a second (making a web request) and none of them is ever stopped.
I've noticed that my program takes about 80% of my computer's processing resources and about 15 MB of memory (Core 2 Duo, 4 GB DDR2 memory).
Is it normal? Web requests are not heavy duty, each one just sends and awaits the server response, and yes, running 15 of them is really not a pro-performance act (speed was needed), but I didn't think it would be so intense.
I am new to programming, and I hardly ever (just as any new programmer, I assume) care about performance, but this time it is ridiculous: 80% CPU usage for a Windows Forms application with two listboxes and background workers making web requests isn't really what I expected.
info:
I use exception handling as part of my routine, which I once read isn't really good for performance
I have 15 background workers
My code assures none of them is ever idle
Windows Forms, Visual Studio, C#.
------[edit - questions in answers]------
What exactly do you mean by "My code assures none of them is ever idle"?
The program remains waiting
while (bgw1.IsBusy || gbw2.IsBusy ... ... ...) { Application.DoWork();}
then when any of them is free, it gets put back to work.
Could you give more details about the workload you're putting this under?
I make an HTTP web request object, open it and wait for the server response. It really is only a couple of lines and does no heavy processing; the half second is spent waiting for the server.
In what way, and how many exceptions are being thrown?
When the page doesn't exist, there is a System.Net.WebException; when it works, it returns "OK". About 99% of the pages I check don't exist, so I'd say about 300 exceptions per minute (putting it like this makes it sound creepy, I know, but it works).
If you're running in the debugger, then exceptions are much more expensive than they would be when not debugging
I'm not talking about running it in the debugger, I run the executable, the resulting EXE.
while (bgw1.IsBusy || gbw2.IsBusy ... ... ...) { Application.DoWork();}
What's Application.DoWork(); doing? If it's doing something quickly and returning, this loop alone will consume 100% CPU since it never stops doing something. You can put a sleep(.1) or so inside the loop, to only check the worker threads every so often instead of continuously.
This bit concerns me:
My code assures none of them is ever idle
What exactly do you mean by that?
If you're making thousands and thousands of web requests, and if those requests are returning very quickly, then that could eat some CPU.
Taking 15 MB of memory isn't unexpected, but the CPU usage is the more worrying bit. Could you give more details about the workload you're putting this under? What do you mean by "each one of them works for about half a second"?
What do you mean by "I use exception handling as part of my routine"? In what way, and how many exceptions are being thrown? If you're running in the debugger, then exceptions are much more expensive than they would be when not debugging - if you're throwing and catching a lot of exceptions, that could be responsible for it...
Run the program in the debugger, pause it ten times, and have a look at the stacktraces. Then you will know what it is actually doing when it's busy.
From your text I read that you have a Core 2 Duo. Is that 2 threads or 4 threads?
If you have 2 threads, you should only use 2 BackgroundWorkers simultaneously.
If you have 4 threads, then use 4 BGWs simultaneously. If you have more BGWs, then frequently use the following statement:
System.Threading.Thread.Sleep(1)
Also use Application.DoEvents().
My general advice is: start simple and slowly make your application more complex.
Have a look at: Visual Basic 2010 Parallel Programming techniques.

NSThread or pythons' threading module in pyobjc?

I need to do some network bound calls (e.g., fetch a website) and I don't want it to block the UI. Should I be using NSThread's or python's threading module if I am working in pyobjc? I can't find any information on how to choose one over the other. Note, I don't really care about Python's GIL since my tasks are not CPU bound at all.
It will make no difference, you will gain the same behavior with slightly different interfaces. Use whichever fits best into your system.
Learn to love the run loop. Use Cocoa's URL-loading system (or, if you need plain sockets, NSFileHandle) and let it call you when the response (or failure) comes back. Then you don't have to deal with threads at all (the URL-loading system will use a thread for you).
Pretty much the only time to create your own threads in Cocoa is when you have a large task (>0.1 sec) that you can't break up.
(Someone might say NSOperation, but NSOperationQueue is broken and RAOperationQueue doesn't support concurrent operations. Fine if you already have a bunch of NSOperationQueue code or really want to prepare for working NSOperationQueue, but if you need concurrency now, run loop or threads.)
I'm more fond of the native Python threading solution since I can join and reference the threads. AFAIK, NSThreads don't support thread joining and cancelling, and you can get a variety of things done with Python threads.
Also, it's a bummer that NSThreads can't have multiple arguments, and though there are workarounds for this (like using NSDictionarys and NSArrays), it's still not as elegant and as simple as invoking a thread with arguments laid out in order / corresponding parameters.
But yeah, if the situation demands that you use NSThreads, there shouldn't be any problem at all. Otherwise, it's fine to stick with native Python threads.
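As a rough sketch of that native-threading approach (my own illustration, assuming Python 3's urllib; the URL is a placeholder), the network-bound fetch runs off the main thread so the UI stays responsive, and join() gives you the thread management mentioned above:
import threading
import urllib.request

results = {}

def fetch(url):
    # Network-bound work; the GIL is released while waiting on the socket.
    with urllib.request.urlopen(url) as resp:
        results[url] = resp.read()

worker = threading.Thread(target=fetch, args=("https://example.com",))
worker.start()
# ... keep servicing the UI / run loop here ...
worker.join()  # joining is exactly the flexibility mentioned above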
I have a different suggestion, mainly because Python threading is just plain awful due to the GIL (Global Interpreter Lock), especially when you have more than one CPU core. There is a video presentation that goes into this in excruciating detail, but I cannot find it right now; it was done by a Google employee.
Anyway, you may want to think about using the subprocess module instead of threading (have a helper program that you can execute, or use another binary on the system). Or use NSThread; it should give you more performance than what you can get with CPython threads.
