I want to write a code analyser. The idea is to intercept any method called at any time and run a generic method that receives the name of the originally called method as an argument.
This new method would start a timer and increment the number of calls performed, then launch the originally requested method. Finally, when that method is done, it would record the execution time and let the rest of the execution continue.
So, two questions:
Is there an existing benchmark tool that can do this (give the number of calls and the execution time, method by method)?
Is there any way to redirect any method call to a generic method in my code?
Thanks in advance for your appreciated help!
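For illustration, here is a rough sketch in Python (purely as an example; the decorator name and the stats structure are hypothetical) of the generic wrapper described above: it increments a call counter, starts a timer, runs the originally requested method, then records the elapsed time.
import functools
import time

call_stats = {}  # method name -> [number of calls, total execution time in seconds]

def measured(func):
    # Hypothetical generic wrapper: count the call, time it, then run the real method.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        stats = call_stats.setdefault(func.__name__, [0, 0.0])
        stats[0] += 1                      # increment the number of calls
        start = time.perf_counter()        # start the timer
        try:
            return func(*args, **kwargs)   # launch the originally requested method
        finally:
            stats[1] += time.perf_counter() - start  # add the execution time
    return wrapper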
I'm using Puppeteer Sharp to render reports, and part of that is executing user-provided JavaScript to prepare the data for the report.
I use AddScriptTagAsync to add the scripts to the page, then call the user-provided script before rendering the report.
If the user-provided JavaScript has an issue that causes an infinite loop (for example), then my call to EvaluateExpressionAsync could await forever:
await page.EvaluateExpressionAsync<dynamic>($"Prepare({DataObject});")
I can't pass a cancellation token to EvaluateExpressionAsync so I can't control it and there appear to be no timeouts available for this method.
I would like to limit it to a controllable number of seconds and then have it timeout.
Any suggestions on how to do this would be very much appreciated.
You would use WaitForExpressionAsync.
The idea of this method is to execute the expression repeatedly over a period of time until the result is truthy.
So if you make sure that your expression returns a truthy value once it has finished, WaitForExpressionAsync will time out using the timeout you pass as an option.
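A minimal sketch of that approach, assuming PuppeteerSharp's WaitForExpressionAsync overload that takes WaitForFunctionOptions; the Prepare call and the data object come from the question, while the helper method name and the 5 second timeout are just examples:
using System.Threading.Tasks;
using PuppeteerSharp;

public static async Task RunUserScriptWithTimeoutAsync(Page page, string dataObject)
{
    try
    {
        // Wrap the call so the expression returns a truthy value once Prepare finishes,
        // and let WaitForExpressionAsync enforce the timeout if it never does.
        await page.WaitForExpressionAsync(
            $"(() => {{ Prepare({dataObject}); return true; }})()",
            new WaitForFunctionOptions { Timeout = 5000 }); // milliseconds
    }
    catch (WaitTaskTimeoutException)
    {
        // The user-provided script did not finish in time (e.g. an infinite loop).
    }
}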
TL;DR
How do I safely await the execution of a function (which takes a str and an int as arguments and requires no other context) in a separate process?
Long story
I have an aiohttp.web API that uses a Boost.Python wrapper for a C++ extension, runs under Gunicorn (and I plan to deploy it on Heroku), and is load-tested with Locust.
About the extension: it has just one function that performs a non-blocking operation: it takes one string (and one integer for timeout management), does some calculations with it, and returns a new string. For every input string there is only one possible output (except on timeout, but in that case a C++ exception must be raised and translated by Boost.Python into a Python-compatible one).
In short, a handler for specific URL executes the code below:
res = await loop.run_in_executor(executor, func, *args)
where executor is a ProcessPoolExecutor instance and func is the function from the C++ extension module. (In the real project this code is in a coroutine method of a class, and func is a classmethod that only executes the C++ function and returns the result.)
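For context, a minimal sketch of that wiring might look like the following (the route, handler and module names are placeholders; cpp_extension stands in for the Boost.Python module):
import asyncio
from concurrent.futures import ProcessPoolExecutor

from aiohttp import web

import cpp_extension  # placeholder for the Boost.Python extension module

executor = ProcessPoolExecutor(max_workers=4)

async def handle(request):
    data = await request.post()
    loop = asyncio.get_event_loop()
    # Run the CPU-bound C++ call in a separate process so it never blocks the event loop.
    result = await loop.run_in_executor(
        executor, cpp_extension.func, data["input"], 5)  # input string, timeout in seconds
    return web.Response(text=result)

def create_app():
    app = web.Application()
    app.router.add_post("/process", handle)
    return app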
Error catching
When a new request arrives, I extract its POST data with request.post() and then store it in an instance of a custom class named Call (because I have no idea what else to call it). That call object contains all the input data (the string), the time the request was received, and the unique id that comes with the request.
It then goes to a class named Handler (not the aiohttp request handler), which passes its input to another class's method that uses loop.run_in_executor. Handler also has a logging system that works like middleware: it reads the id and receive time of every incoming call object and logs a message saying whether it is just starting to execute, has executed successfully, or has run into trouble. Handler also has a try/except and stores all errors inside the call object, so the logging middleware knows which error occurred or what output the extension returned.
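A rough sketch of the structures described above (the field names, the timeout value and the logging details are assumptions based on the description, not the actual project code):
import logging
import time
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Call:
    input_data: str
    call_id: str                                  # unique id that comes with the request
    received_at: float = field(default_factory=time.time)
    output: Optional[str] = None
    error: Optional[Exception] = None

class Handler:
    def __init__(self, executor, func):
        self.executor = executor
        self.func = func

    async def process(self, loop, call):
        logging.info("call %s: starting", call.call_id)
        try:
            call.output = await loop.run_in_executor(
                self.executor, self.func, call.input_data, 5)
            logging.info("call %s: finished successfully", call.call_id)
        except Exception as exc:
            call.error = exc                      # keep the error on the call object
            logging.error("call %s: failed with %r", call.call_id, exc)
        return call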
Testing
I have a unit test that just creates 256 coroutines with this code inside, against an executor with 256 workers, and it works well.
But when testing with Locust a problem appears. I use 4 Gunicorn workers and 4 executor workers for this kind of testing. At some point the application just starts to return wrong output.
My Locust TaskSet is configured to log every failed response with all available information: output string, error string, input string (which the application returns as well), and id. All simulated requests are the same, but the id is unique for each one.
The situation is better when setting Gunicorn's max_requests option to 100 requests, but failures still occur.
The interesting thing is that sometimes I can trigger a "wrong output" period simply by stopping and restarting Locust's test.
I need a 100% guarantee that my web API works as I expect.
UPDATE & solution
I just asked my teammate to review the C++ code: the problem was global variables. Somehow this wasn't a problem for 256 parallel coroutines, but under Gunicorn it was.
I'm currently using the Calabash framework to automate functional testing for a native Android and iOS application. While studying it, I stumbled upon this example project from Xamarin that uses the page objects design pattern, which I find much better for organizing the code in a Selenium fashion.
I have made a few adjustments to the original project, adding a file called page_utils.rb to the support directory of the Calabash project structure. This file has this method:
def change_page(next_page)
  sleep 2
  puts "current page is #{current_page_name} changing to #{next_page}"
  @current_page = page(next_page).await(PAGE_TRANSITION_PARAMETERS)
  sleep 1
  capture_screenshot
  @current_page.assert_info_present
end
So in my custom step implementations, when I want to change the page, I trigger the event that changes the page in the UI and then update the reference for Calabash by calling this method, for example:
@current_page.click_to_home_page
change_page(HomePage)
PAGE_TRANSITION_PARAMETERS is a hash with parameters such as timeout:
PAGE_TRANSITION_PARAMETERS = {
  timeout: 10,
  screenshot_on_error: true
}
It just so happens that whenever I have a timeout waiting for any element on any screen during a test run, I get a generic error message such as:
Timeout waiting for elements: * id:'btn_ok' (Calabash::Android::WaitHelpers::WaitError)
./features/support/utils/page_utils.rb:14:in `change_page'
./features/step_definitions/login_steps.rb:49:in `/^I enter my valid credentials$/'
features/04_support_and_settings.feature:9:in `And I enter my valid credentials'
btn_ok is the id defined for the trait of the first screen in my application. I don't understand why this keeps popping up even in steps beyond that screen, masking the real problem.
Can anyone help me get rid of this annoyance? It makes debugging test failures really hard, especially on the test cloud.
Welcome to Calabash!
As you might be aware, you'll get a Timeout waiting for elements: exception when you attempt to query/wait for an element which can't be found on the screen. When you call page.await(opts), it is actually calling wait_for_elements_exist([trait], opts), which means in your case that after 10 seconds of waiting, the view with id btn_ok can't be found on the screen.
What is assert_info_present? Does it call wait_for_element_exists or something similar? More importantly, what method is actually being called in page_utils.rb:14?
And does your app actually return to the home screen when you invoke click_to_home_page?
Unfortunately it's difficult to diagnose the issue without some more info, but I'll throw out a few suggestions:
My first guess, without seeing your application or your step definitions, is that @current_page.click_to_home_page is taking longer than 10 seconds to actually bring the home page back. If that's the case, simply try increasing the timeout (or removing it altogether, since the default is 30 seconds; see the source).
My second guess is that the element with id btn_ok is not actually visible on screen when your app returns to the home screen. If that's the case, you could try changing the trait definition from * id:'btn_ok' to all * id:'btn_ok' (the all operator will include views that aren't actually visible on screen). Again, I have no idea what your app looks like so it's hard to say.
My third guess is it's something related to assert_info_present, but it's hard to say without seeing the step defs.
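As a rough illustration of the first two suggestions (the class name, trait and timeout values here are examples to adapt, not code from your project):
# 1. Raise the page-transition timeout, or drop it to fall back on the default:
PAGE_TRANSITION_PARAMETERS = {
  timeout: 30,
  screenshot_on_error: true
}

# 2. Let the home page trait also match views that are not currently visible:
class HomePage < Calabash::ABase
  def trait
    "all * id:'btn_ok'"
  end
end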
On an unrelated note, I apologize if our sample code is a bit outdated, but at the time of writing we generally don't encourage the use of @current_page to keep track of a page. Calabash was written in a more or less stateless manner, and we generally encourage step definitions to avoid using state wherever possible.
Hope this helps! Best of luck.
I'm developing a WinJS/HTML Windows Store application.
I have to run some checks periodically, so let me explain my need.
When I navigate to a specific page, I have to test a condition repeatedly (I don't know in advance how long it will take, so it has to be a loop).
When my condition is verified, it will render a Flyout (popup) and then exit from the promise. (setTimeout needs a specific delay, but I need to check periodically.)
I have read MSDN but I can't accomplish this goal.
If someone has an idea how to do it, I would be thankful.
Any help will be appreciated.
setInterval can be used.
var timerId = setInterval(function () {
    // do your work / check your condition here
}, 2000); // the timer fires every 2 s

// Invoke this when the timer needs to be stopped or when you move out of
// the page, i.e. in its unload() method.
clearInterval(timerId);
Instead of polling at fixed intervals, you should check whether you can adapt your code to use events or databinding instead.
In WinJS you can use databinding to bind input values to a view model and then check in its setter functions if your condition has been fulfilled.
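A rough sketch of that idea, assuming a WinJS observable view model (the property name and the condition are placeholders):
// Make a plain object observable so changes to its properties raise notifications.
var viewModel = WinJS.Binding.as({ status: null });

// React whenever "status" changes instead of polling on a timer.
viewModel.bind("status", function (newValue) {
    if (newValue === "ready") {   // your condition
        // show the Flyout / popup here, then continue with the rest of the page logic
    }
});

// Elsewhere in the page (e.g. when the data arrives), updating the property
// triggers the callback above.
viewModel.status = "ready";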
Generally speaking, setInterval et al should be avoided for anything that's not really time-related domain logic (clocks, countdowns, timeouts or such). Of course there are situations when there's no other way (like polling remote services), so this may not apply to your situation at hand.
I'm just discovering the new (to my system) ABAP Debugger Script.
Say this is my program:
* Assume I have the class LCL_SMTH with public methods INCREMENT and REFRESH
DATA: lo_smth TYPE REF TO lcl_smth.
CREATE OBJECT lo_smth.
lo_smth->increment( ).
WRITE 'Nothing happened'.
Could I get my script to call the REFRESH method after it exits INCREMENT?
I set the script to execute when the INCREMENT method is called, and it does so. I know I then have to step out (F7), which I also do; I just don't know how to invoke the REFRESH method.
A debugger script can do exactly what you could do manually, so you can't do this... unless you could do it manually. Since you can jump to a statement manually in the debugger, a debugger script can as well. So if there is a suitable call to REFRESH somewhere in the code, you can jump there and back.