Performance issue with Geb lookup on module object attributes vs. using selectors

Problem Description
I am writing a geb/spock spec which fetches test data from DB2 into a map (the map variable is called "preFilledFields" - see the "MySpec" class further down).
This map is then iterated over, and for each iteration I check to see if the value matches the one in a row on the page.
When I perform the assertion by accessing the module object attributes, the average execution time per assertion is approx. 5-6 seconds. If I perform the assertion using selectors directly, the average execution time per assertion is approx. 70-80 ms. See the "MyPage" class for more details regarding the assertions.
Does anyone know what could be the cause of this? Is the bad performance a result of my code, or is there a general problem with performance when using modules in Geb?
Appreciate any help and input I can get.
Code:
My "RowModule" class looks like this:
class RowModule extends Module {
static final PREDEFINED_ATTR = "data-predefined-amount"
static content = {
cell { $("td", it) }
description { cell(0).text() }
rubrikNum { cell(1).text().toInteger() }
preDefinedAmount { cell(0).parent("tr").$("td[$PREDEFINED_ATTR]").attr("$PREDEFINED_ATTR") }
inputField { cell(0).parent("tr").$("td input") ?: false }
dataType{ cell(0).parent("tr").attr("data-type") }
}
}
My Page class looks like this:
class MyPage extends Page {
static url = "<some_url>"
static at = { $("h1").text() == "<some_text>" }
static content = {
submitButton { $("input", name:"<some_name>") }
myPageItems {
$("table tr").collect { it.module(RowModule) }
}
}
void verifyPrePopulatedFields(name, amount) {
long now = System.currentTimeMillis();
assert amount == myPageItems.find { it.dataType == name }.preDefinedAmount.toInteger()
//assert amount == $("tr[data-type='" + name+ "']").$(".skts-tooltip-holder").text().toInteger()
println "Execution time" + (System.currentTimeMillis() - now) + " ms"
}
void submit() { submitButton.click() }
}
My Spec file looks like this:
class MySpec extends GebReportingSpec {
@Unroll
def "field #name is pre-populated with amount #amount from the database"() {
expect:
page(MyPage).verifyPrePopulatedFields(name, amount)
where:
name << preFilledFields.keySet()
amount << preFilledFields.values()
}
}

There are no general performance problems with using modules in Geb, at least none that I know of. Your selectors on the other hand are definitely suboptimal.
Firstly, by doing myPageItems.find { it.dataType == name } you are iterating over all rows in your table and executing 3 WebDriver commands (that is, HTTP requests between your test and the browser being driven) for each of them. You could improve the selector for dataType to dataType { attr("data-type") } (not 100% sure here because I don't see your DOM structure, but this is what logic would suggest), but it would still mean potentially making a lot of requests. You should instead add a content definition to your page like this:
myItem { dataType ->
$("table tr[data-type='$dataType']").module(RowModule)
}
And then use it like:
assert amount == myItem(name).preDefinedAmount.toInteger()
Secondly, you can simplify and improve the performance of your selectors in the module (if my assumptions about your DOM are correct):
static content = {
cell { $("td", it) }
description { cell(0).text() }
rubrikNum { cell(1).text().toInteger() }
preDefinedAmount { $("td[$PREDEFINED_ATTR]").attr("$PREDEFINED_ATTR") }
inputField { $("td input") ?: false }
dataType{ attr("data-type") }
}
You should avoid using multiple selectors for things that can be found using a single selector, or using unnecessary selectors, because they will always carry a performance penalty.
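Putting both suggestions together, the page class could end up looking roughly like this (just a sketch based on the assumptions about your DOM made above, reusing the names from your original code):
class MyPage extends Page {
    static url = "<some_url>"
    static at = { $("h1").text() == "<some_text>" }
    static content = {
        submitButton { $("input", name: "<some_name>") }
        // a single CSS selector per lookup keeps the number of WebDriver round trips low
        myItem { dataType -> $("table tr[data-type='$dataType']").module(RowModule) }
    }
    void verifyPrePopulatedFields(name, amount) {
        assert amount == myItem(name).preDefinedAmount.toInteger()
    }
    void submit() { submitButton.click() }
}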

Related

Laravel: tag all classes implementing an interface

I'm using Laravel 8 and I want to get all the classes that implement an interface X.
I did it with Symfony 4 a few months ago with DI:
services.yml
_instanceof:
App\Calculator\Budget\BudgetCalculatorInterface:
tags: ['app.budget_calculator']
App\Handler\CalculatorBudgetHandler:
arguments: [!tagged app.budget_calculator]
and then in my class CalculatorBudgetHandler.php
private $calculatorList = [];
public function __construct(iterable $calculatorList)
{
$this->calculatorList = $calculatorList;
}
public function calculate(array $data): float
{
foreach ($this->calculatorList as $calculator) {
if ($calculator->supports($data)) {
return $calculator->calculate($data);
}
}
}
but I do not understand how to do it with Laravel. I think I have to pass all my classes in a bind or tag:
$this->app->tag([CpuReport::class, MemoryReport::class], 'reports');
Does that mean that if I add a new class implementing X, I have to add it to the bind/tag?
I want to do it automatically.
Thanks!
I needed this too. I looked for quite a while and basically found a solution. The catch is that in PHP, classes aren't actually declared until they have been loaded, so you either have to scan the entire project for classes and test each one to see whether it implements your interface, or (better) you use the Composer autoload class maps. There you can also limit the search scope to a sub-namespace.
A small but cool package working this way is this one: https://gitlab.com/hpierce1102/ClassFinder - it basically uses Composer's PSR-4 class maps and is generally fine performance-wise.
Here is the solution to which I came:
// Add to service provider
private function tagByInterface(string $interfaceName, string $tagName, string $rootNamespace)
{
foreach (ClassFinder::getClassesInNamespace($rootNamespace, ClassFinder::RECURSIVE_MODE) as $className) {
$class = new \ReflectionClass($className);
if ($class->isAbstract() || $class->isInterface()) {
continue;
}
if ($class->implementsInterface($interfaceName)) {
$this->app->tag($className, $tagName);
}
}
}
Which can then be used like this in the register():
$this->tagByInterface(SomeInterface::class, 'some-tag', 'App\Domain\Something');
$this->app->when(SomeClass::class)->needs('$myServices')->giveTagged('some-tag');
As the classes are loaded using reflection, this operation can still take time if your root namespace is not properly set or is too wide. Reflection is quick (as far as I know, quicker than loading the information from a cache), but you should still think about using a deferred provider for the task, so that the search for implementing classes is only triggered when it's actually needed.
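If you go the deferred-provider route, the Laravel shape for that is roughly the following (a sketch of my own, not part of the original solution; the provider name and the SomeInterface/SomeClass/tag names are the same placeholders used above):
use Illuminate\Contracts\Support\DeferrableProvider;
use Illuminate\Support\ServiceProvider;

class CalculatorServiceProvider extends ServiceProvider implements DeferrableProvider
{
    public function register()
    {
        // the tagging/binding done here only runs once one of the services
        // listed in provides() is actually resolved from the container
        $this->tagByInterface(SomeInterface::class, 'some-tag', 'App\Domain\Something');
        $this->app->when(SomeClass::class)->needs('$myServices')->giveTagged('some-tag');
    }

    public function provides()
    {
        return [SomeClass::class];
    }
}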
Update some months later
This solution works, but might be a huge drain on performance if the project gets big. I'm now caching the tagged classes. Something like this:
use HaydenPierce\ClassFinder\ClassFinder as HPClassFinder;
use Illuminate\Contracts\Cache\Repository;
class InheritanceClassFinder
{
public function __construct(private ?Repository $cache = null)
{
}
public function findClassesImplementingOrExtending(string $interfaceOrClass, string $rootNamespace): array
{
if ($this->cache) {
return $this->cache->rememberForever(
'inheriting-classes-'.$interfaceOrClass,
fn () => $this->findClassesInheriting($interfaceOrClass, $rootNamespace));
}
return $this->findClassesInheriting($interfaceOrClass, $rootNamespace);
}
private function findClassesInheriting(string $interfaceOrClass, string $rootNamespace): array
{
$classes = [];
foreach (HPClassFinder::getClassesInNamespace($rootNamespace, HPClassFinder::RECURSIVE_MODE) as $className) {
if (!is_subclass_of($className, $interfaceOrClass)
|| ($class = new \ReflectionClass($className))->isAbstract() || $class->isInterface()) {
continue;
}
$classes[] = $className;
}
return $classes;
}
}
This means that, as long as the cache is injected, everything is loaded once and then taken from the cache. I inject the cache only in production, so locally it's a bit slower but always up to date. In production I throw away the cache with every deployment, so I get a fresh load once after every deployment.
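For illustration, wiring this cached finder up in a service provider's register() might look roughly like this (again a sketch; the interface, tag and namespace names are just the placeholders used earlier, and injecting the cache only in production mirrors what is described above):
use Illuminate\Contracts\Cache\Repository;

$cache = $this->app->environment('production')
    ? $this->app->make(Repository::class)
    : null;

$finder = new InheritanceClassFinder($cache);

// tag every concrete class found under the given namespace
foreach ($finder->findClassesImplementingOrExtending(SomeInterface::class, 'App\Domain\Something') as $class) {
    $this->app->tag($class, 'some-tag');
}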

How to refactor cascading if statements

I found this question on https://github.com/arialdomartini/Back-End-Developer-Interview-Questions#snippets
And I am curious about your opinion; I just can't find a decent solution for this refactoring, and I wonder what pattern would apply in this very common case.
function()
{
HRESULT error = S_OK;
if(SUCCEEDED(Operation1()))
{
if(SUCCEEDED(Operation2()))
{
if(SUCCEEDED(Operation3()))
{
if(SUCCEEDED(Operation4()))
{
}
else
{
error = OPERATION4FAILED;
}
}
else
{
error = OPERATION3FAILED;
}
}
else
{
error = OPERATION2FAILED;
}
}
else
{
error = OPERATION1FAILED;
}
return error;
}
Do you have any idea of how to refactor this?
Actually, I feel there is way more room for refactoring than what Sergio Tulentsev suggested.
The questions in the repo you linked are more about starting a conversation on code than closed-ended questions. So, I think it is worth discussing the smells and design flaws of that code, to set up the refactoring goals.
Smells
I see these problems:
The code violates some of the SOLID principles. It surely violates the Open/Closed Principle, as it is not possible to extend it without changing its code. E.g., adding a new operation would require adding a new if/else branch;
It also violates the Single Responsibility Principle. It just does too much. It performs error checks, it's responsible for executing all four operations, it contains their implementations, and it's responsible for checking their results and chaining their execution in the right order;
It violates the Dependency Inversion Principle, because there are dependencies between high-level and low-level components;
It has a horrible cyclomatic complexity;
It exhibits high coupling and low cohesion, which is exactly the opposite of what is recommended;
It contains a lot of code duplication: the function Succeeded() is repeated in each branch, the structure of if/elses is replicated over and over, and the assignment of error is duplicated;
It could have a pure functional nature, but it relies instead on state mutation, which makes reasoning about it harder;
There's an empty if statement body, which might be confusing.
Refactoring
Let's see what could be done.
Here I'm using a C# implementation, but similar steps can be performed in any language.
I renamed some of the elements, as I believe honoring a naming convention is part of the refactoring.
internal class TestClass
{
HResult SomeFunction()
{
var error = HResult.Ok;
if(Succeeded(Operation1()))
{
if(Succeeded(Operation2()))
{
if(Succeeded(Operation3()))
{
if(Succeeded(Operation4()))
{
}
else
{
error = HResult.Operation4Failed;
}
}
else
{
error = HResult.Operation3Failed;
}
}
else
{
error = HResult.Operation2Failed;
}
}
else
{
error = HResult.Operation1Failed;
}
return error;
}
private string Operation1()
{
// some operations
return "operation1 result";
}
private string Operation2()
{
// some operations
return "operation2 result";
}
private string Operation3()
{
// some operations
return "operation3 result";
}
private string Operation4()
{
// some operations
return "operation4 result";
}
private bool Succeeded(string operationResult) =>
operationResult == "some condition";
}
internal enum HResult
{
Ok,
Operation1Failed,
Operation2Failed,
Operation3Failed,
Operation4Failed,
}
}
For the sake of simplicity, I assumed each operation returns a string, and that success or failure is based on an equality check on that string, but of course it could be anything. In the next steps, it would be nice if the code were independent from the result validation logic.
Step 1
It would be nice to start the refactoring with the support of some test harness.
public class TestCase
{
[Theory]
[InlineData("operation1 result", HResult.Operation1Failed)]
[InlineData("operation2 result", HResult.Operation2Failed)]
[InlineData("operation3 result", HResult.Operation3Failed)]
[InlineData("operation4 result", HResult.Operation4Failed)]
[InlineData("never", HResult.Ok)]
void acceptance_test(string failWhen, HResult expectedResult)
{
var sut = new SomeClass {FailWhen = failWhen};
var result = sut.SomeFunction();
result.Should().Be(expectedResult);
}
}
Our case is a trivial one, but since the quiz is supposed to be a job interview question, I would not ignore it.
Step 2
The first refactoring could be getting rid of the mutable state: each if branch could just return the value, instead of mutating the variable error. Also, the name error is misleading, as it includes the success case. Let's just get rid of it:
HResult SomeFunction()
{
if(Succeeded(Operation1()))
{
if(Succeeded(Operation2()))
{
if(Succeeded(Operation3()))
{
if(Succeeded(Operation4()))
return HResult.Ok;
else
return HResult.Operation4Failed;
}
else
return HResult.Operation3Failed;
}
else
return HResult.Operation2Failed;
}
else
return HResult.Operation1Failed;
}
We got rid of the empty if body, making the code, in the meantime, slightly easier to reason about.
Step 3
If now we invert each if statement (the step suggested by Sergio)
internal HResult SomeFunction()
{
if (!Succeeded(Operation1()))
return HResult.Operation1Failed;
if (!Succeeded(Operation2()))
return HResult.Operation2Failed;
if (!Succeeded(Operation3()))
return HResult.Operation3Failed;
if (!Succeeded(Operation4()))
return HResult.Operation4Failed;
return HResult.Ok;
}
we make it apparent that the code performs a chain of executions: if an operation succeeds, the next operation is invoked; otherwise, the chain is interrupted, with an error. The GOF Chain of Responsibility Pattern comes to mind.
Step 4
We could move each operation to a separate class, and let our function receive a chain of operations to execute in a single shot. Each class would deal with its specific operation logic (honoring the Single Responsibility Principle).
internal HResult SomeFunction()
{
var operations = new List<IOperation>
{
new Operation1(),
new Operation2(),
new Operation3(),
new Operation4()
};
foreach (var operation in operations)
{
if (!_check.Succeeded(operation.DoJob()))
return operation.ErrorCode;
}
return HResult.Ok;
}
We got rid of the ifs altogether (all but one).
Notice how:
The interface IOperation has been introduced, which is a preliminary move to decouple the function from the operations, complying with the Dependency Inversion Principle (a possible shape of this interface is sketched further below);
The list of operations can easily be injected into the class, using Dependency Injection;
The result validation logic has been moved to a separate class, Check, injected into the main class (Dependency Inversion and Single Responsibility are satisfied).
internal class SimpleStringCheck : IResultCheck
{
private readonly string _failWhen;
public SimpleStringCheck(string failWhen)
{
_failWhen = failWhen;
}
internal bool Succeeded(string operationResult) =>
operationResult != _failWhen;
}
We gained the ability to switch the check logic without modifying the main class (Open-Closed Principle).
Each operation has been moved to a separate class, like:
internal class Operation1 : IOperation {
public string DoJob()
{
return "operation1 result";
}
public HResult ErrorCode => HResult.Operation1Failed;
}
Each operation knows its own error code. The function itself became independent from it.
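For reference, the IOperation interface itself is not spelled out above; given how it is used, it might look like this (just a sketch):
internal interface IOperation
{
    string DoJob();
    HResult ErrorCode { get; }
}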
Step 5
There is something more to refactor in the code:
foreach (var operation in operations)
{
if (!_check.Succeeded(operation.DoJob()))
return operation.ErrorCode;
}
return HResult.Ok;
}
First, it's not clear why the case return HResult.Ok; is handled as a special case: the chain could contain a terminating operation that never fails and returns that value. This would allow us to get rid of that last if.
Second, our function still has 2 responsibilities: visiting the chain, and checking the result.
An idea could be to encapsulate the operations into a real chain, so our function could reduce to something like:
return operations.ChainTogether(_check).Execute();
We have 2 options:
Each operation knows the next operation, so starting from operation1 we could execute the whole chain with a single call;
Operations are kept unaware of being part of a chain; a separate, encapsulating structure adds to operations the ability to be executed in sequence.
I'm going with the latter, but that's absolutely debatable. I'm introducing a class modelling a ring in the chain, moving the code away from our class:
internal class OperationRing : IRing
{
private readonly Check _check;
private readonly IOperation _operation;
internal IRing Next { private get; set; }
public OperationRing(Check check, IOperation operation)
{
_check = check;
_operation = operation;
}
public HResult Execute()
{
var operationResult = _operation.DoJob();
if (_check.Succeeded(operationResult))
return Next.Execute();
return _operation.ErrorCode;
}
}
This class is responsible for executing an operation and handing execution over to the next ring if it succeeded, or for interrupting the chain and returning the right error code.
The chain will be terminated by a never-failing element:
internal class AlwaysSucceeds : IRing
{
public HResult Execute() => HResult.Ok;
}
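Again for reference, the IRing interface is not shown explicitly; from its usage it would be something like (a sketch):
internal interface IRing
{
    HResult Execute();
}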
Our original class reduces to:
internal class SomeClass
{
private readonly Check _check;
private readonly List<IOperation> _operations;
public SomeClass(Check check, List<IOperation> operations)
{
_check = check;
_operations = operations;
}
internal HResult SomeFunction()
{
return _operations.ChainTogether(_check).Execute();
}
}
In this case, ChainTogether() is a function implemented as an extension method on List<IOperation>, as I don't believe that the chaining logic is a responsibility of our class.
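The extension itself is not shown in the answer; one possible implementation, assuming the types sketched above, could be:
internal static class OperationChainExtensions
{
    // Build the chain back to front: the terminating, never-failing ring comes last,
    // and each operation is wrapped in a ring pointing to the next one.
    internal static IRing ChainTogether(this List<IOperation> operations, Check check)
    {
        IRing next = new AlwaysSucceeds();
        for (var i = operations.Count - 1; i >= 0; i--)
        {
            next = new OperationRing(check, operations[i]) { Next = next };
        }
        return next;
    }
}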
That's not the right answer
It's absolutely debatable that the responsibilities have been separated to the most appropriate classes. For example:
is chaining operations a task of our function? Or should it directly receive the chained structure?
why the use of an enum? As Martin Fowler wrote in "Refactoring: Improving the Design of Existing Code", enums are a code smell and should be refactored into polymorphic classes;
how much is too much? Is the resulting design too complex? Does the complexity of the whole application need this level of modularisation?
Therefore, I'm sure there are several other ways to refactor the original function. In a job interview, or in a pair programming session, I expect a lot of discussions and evaluations to occur.
You could use early returns here.
function() {
if(!SUCCEEDED(Operation1())) {
return OPERATION1FAILED;
}
if(!SUCCEEDED(Operation2())) {
return OPERATION2FAILED;
}
if(!SUCCEEDED(Operation3())) {
return OPERATION3FAILED;
}
if(!SUCCEEDED(Operation4())) {
return OPERATION4FAILED;
}
// everything succeeded, do your thing
return S_OK;
}

Is there a way to let Apollo Client globally insert empty strings during loading?

I'm using Apollo Client to receive the GraphQL data for my application. Over time, I see a pattern emerging where for every value I'm querying, I have to include a conditional statement to handle the moment where my data is still loading.
Assume a query looks like this:
query TestQuery($userId: Int!) {
getUser(id: $userId) {
name
}
}
Then, in every place where I want to display the user name, I have to write something like:
{ !this.props.data.loading && this.props.data.getUser.name }
or
{ this.props.data.getUser && this.props.data.getUser.name }
I don't want to display "Loading..." or a rotating spinner in any of these places. Is there a way to avoid this conditional statement by globally replacing all this.props.data.x.y.z values with null or an empty String during loading?
If so, how? Would this be considered an antipattern or bad practice?
If not, which of the above two forms is preferred?
Thanks.
How about this approach?
class GraphqlComponent extends React.Component {
renderError(){
// ...
}
renderLoading(){
// ...
}
renderLoaded(){
}
render(){
const { loading, error } = this.props;
if(error){
return this.renderError();
}
if(loading){
return this.renderLoading();
}
return this.renderLoaded();
}
}
class MyComponent extends GraphqlComponent{
renderLoaded(){
// your logic goes here
}
}
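For example, a consumer of this base class could then read query data without any loading guards (a sketch of my own; it assumes the query result, like loading and error in the snippet above, reaches the component as a top-level prop, and the prop and component names are illustrative):
class UserName extends GraphqlComponent {
    renderLoaded(){
        // by the time this runs, render() has already handled the loading and error states
        return <span>{this.props.getUser.name}</span>;
    }
}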

.Max() method giving error when used in an if else

My application is in ASP.NET, coded in C#, and I'm using LINQ for database transactions. My requirement is to get the max value of the records saved in a certain table; for this I'm using the Max() method.
Below is my controller code :
[HttpPost]
public ActionResult Create(Entity_Name Entity_Object)
{
if (Entity_Object.Condition == true)
{
My required code
}
else
{
var get_Max_Number = db.Entity_Name.ToList();
long Max_Number = 0;
if (get_Max_Number.Count() > 0)
{
Max_Number = Convert.ToInt64(get_Max_Number.Max());
}
My required code
}
}
My issue is that when I remove the if-else condition, the same Max() query works perfectly, but when I add the if-else statement I get the following error.
Error:
At least one object must implement IComparable.
What I tried:
I attempted to remove the if-else;
I placed the Max() logic above the if-else.
Placing the Max() method above the if-else:
[HttpPost]
public ActionResult Create(Entity_Name Entity_Object)
{
var get_Max_Number = db.Entity_Name.ToList();
long Max_Number = 0;
if (get_Max_Number.Count() > 0)
{
Max_Number = Convert.ToInt64(get_Max_Number.Max());
}
if (Entity_Object.Condition == true)
{
My required code
}
else
{
My required code
}
}
Max() needs to know what you're getting the maximum of. If your Entity_Name class contains a number of properties (strings, ints, etc.) then you need to tell it which property to base the maximum on.
Another thing: from the looks of things you're connecting to a DB via LINQ, but executing your Count() and Max() functions in memory after you've retrieved the entire contents of the database table. This will be very inefficient as the table grows in size. LINQ to SQL and LINQ to Entities support pushing those functions down to the database level. I'd recommend changing your code to the following.
[HttpPost]
public ActionResult Create(Entity_Name Entity_Object)
{
if (Entity_Object.Condition == true)
{
//My required code
}
else
{
long Max_Number = 0;
if(db.Entity_Name.Count() > 0)
{
Max_Number = Convert.ToInt64(
db.Entity_Name.Max(x => x.PropertyToGetMaxOf)
);
}
//My required code
}
}

Grails validation dependent on related domains

I have several domain classes that are related, and I am trying to figure out how to implement a constraint that depends on multiple domains. The gist of the problem is:
Asset has many Capacity pool objects
Asset has many Resource objects
When I create/edit a resource, need to check that total resources for an Asset doesn't exceed Capacity.
I created a service method that accomplishes this, but shouldn't this be done via a validator in the Resource domain? My service method is listed below:
def checkCapacityAllocation(Asset asset, VirtualResource newItem) {
// Get total Resources allocated from "asset"
def allAllocated = Resource.createCriteria().list() {
like("asset", asset)
}
def allocArray = allAllocated.toArray()
def allocTotal=0.0
for (def i=0; i<allocArray.length; i++) {
allocTotal = allocTotal.plus(allocArray[i].resourceAllocated)
}
// Get total capacities for "asset"
def allCapacities = AssetCapacity.createCriteria().list() {
like("asset", asset)
}
def capacityArray = allCapacities.toArray()
def capacityTotal = 0.0
for (def i=0; i<capacityArray.length; i++) {
capacityTotal += capacityArray[i].actualAvailableCapacity
}
if (allocTotal > capacityTotal) {
return false
}
return true
}
The problem I am having is using this method for validation. I am using the JqGrid plugin (with inline editing) and error reporting is problematic. If I could do this type of validation in the domain it would make things a lot easier. Any suggestions?
Thanks so much!
To use the service method as a validator, you'll need to inject the service into your domain, then add a custom validator that calls it. I think it'll look something like this:
class Asset {
def assetService
static hasMany = [resources: Resource]
static constraints = {
resources(validator: { val, obj ->
obj.assetService.checkCapacityAllocation(obj, val)
})
}
}
How about:
def resourceCount = Resource.countByAsset(assetId)
def assetCapacityCount = AssetCapacity.countByAsset(assetId)
if(resourceCount < assetCapacityCount) return true
return false
HTH
