Optimization in XPath

I'm doing some Selenium tests now and I have code like this:
Assert.IsTrue(selenium.IsElementPresent("//div[text()='RSS Feed']"));
Assert.IsTrue(selenium.IsElementPresent("//div[#id='btnLogout_Container']"));
I've replaced it with this:
Assert.IsTrue(selenium.IsElementPresent("//div/dl/dt/a/div[text()='RSS Feed']"));
Assert.IsTrue(selenium.IsElementPresent("//tbody/tr/td/div[#id='btnLogout_Container']"));
Then I ran some tests and timed them: the results were practically the same, with a difference of only 0.001 seconds. So I wonder: does this change (making the XPath more specific) affect the speed of the program and lower the time required to find an element on the page?

It depends on the XPath processor and on many other things that you have told us nothing about. If your source document is small, for example, compiling the XPath expression will take much longer than executing it.
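One rough way to build intuition is to time compilation and evaluation separately. Below is a minimal sketch using Python's lxml (a different XPath engine from the one Selenium drives, so absolute numbers won't transfer); page.html is a hypothetical saved copy of the page:

import timeit
from lxml import etree

# Parse a locally saved copy of the page (hypothetical file name).
doc = etree.parse('page.html', etree.HTMLParser())

for expr in ("//div[text()='RSS Feed']",
             "//div/dl/dt/a/div[text()='RSS Feed']"):
    compiled = etree.XPath(expr)  # compile once, reuse for every query
    compile_time = timeit.timeit(lambda: etree.XPath(expr), number=1000)
    eval_time = timeit.timeit(lambda: compiled(doc), number=1000)
    print(f"{expr}\n  compile: {compile_time:.4f}s  evaluate: {eval_time:.4f}s  (1000 runs each)")

On a small document you may well find the two expressions indistinguishable, which is the point above: fixed overhead dominates, so the extra path steps buy you little.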

Related

Lost Duration while Debugging Apex CPU time limit exceeded

I'm open to posting the code in this section to work through the optimization, but it's a bit lengthy and complex, so instead I'm hoping that somebody can assist me with a few debugging questions I have. My goal is to find out what is causing my Apex CPU Time Limit Exceeded issue.
When using the Debug Log in its basic or normal layout I receive the message
Maximum CPU Time: 15062 out of 10,000 ** Close to Limit
I've optimized and rewritten various loops and queries several times now, and in each case this number ends up around there, which leads me to believe it is lying to me and that my actual usage far exceeds that number. So on my journey I switched the Log Panels of the Developer Console to Analysis in hopes of isolating exactly which loop, method, or area of the code is giving me a headache.
This leads me to my main question and problem.
Execution Tree, Performance Tree & Executed Units
All show me that my durations are UNDER the 10,000 ms allowance. My largest consumer is 3,556.19 ms, spent in the constructor of a wrapper class I created, where a fair amount of logic builds a fairly complicated wrapper that spans 5-7 custom objects. Still, even with those 3,000 ms, the remainder of the process shows negligible times, bringing my total to around 4,000 ms. Again, my question is: why am I unable to see or find what is consuming all my time?
Incorrect Iteration Data
In addition to this, the Performance tree has a column of data that shows the number of iterations for each method. I know that my production org has 81 objects that would essentially call the constructor for my custom wrapper object, i.e. my constructor SHOULD be called 81 times, but instead it is called 32 times. So my other question is: can I rely on the iteration data in that column, or does it stop counting at a certain point because it was iterating so many times? It's possible that one of my objects is corrupted or causing an infinite loop somehow, but I don't want to dig through all the data in search of that conclusion if it's a known issue that the iteration data is not accurate anyway.
System.Debug in the Production org
The last question is why my System.debug() lines are not displaying in the Developer Console on the production org. I've added several breadcrumbs throughout the code that would help me isolate just which objects are making it through and which are not; however, in no layout can I view System.debug messages outside of my sandbox.
Sorry for the wealth of questions but I did want to give an honest effort to better understand the debugging process in Salesforce. If this is a lost cause I'm happy to start sharing some code as well but hopefully some debugging tips can get me to the solution.
It's likely your debug log got truncated; see "Each debug log must be 20 MB or smaller. If it exceeds this amount, you won’t see everything you need." in https://trailhead.salesforce.com/en/content/learn/modules/apex_basics_dotnet/debugging_diagnostics
Download the log and search for text similar to "skipped 123456 bytes of detailed log" to confirm; some System.debug statements will simply not show up.
You might have to fine-tune the log levels (don't log validation rules and workflows; don't log every single variable assignment at the "FINE" level, etc.). You might have to set all flags to NONE and then track only the one particular class/trigger you suspect (see https://help.salesforce.com/articleView?id=code_debug_log_classes.htm&type=5 and https://salesforce.stackexchange.com/questions/214380/how-are-we-supposed-to-use-debug-logs-for-a-specific-apex-class-only).
If it's truncated, it's possible the analysis tools give up. I've had mixed luck with the console, to be honest; sometimes https://apextimeline.herokuapp.com/ is great for an overview, but it will also fail to parse a 20 MB log.
When all else fails, you can load the log into Notepad++ (or any editor of your choice), find the lines related to method entry/method exit (you might need a regular expression search), take these filtered lines to Excel, play with "text to columns" and just look at the timing manually to see if there's a record that causes the spike. It could be record #10 that's the problem; the fact that limits are exhausted on #32 of 81 doesn't mean much. A search like [METHOD_ENTRY|METHOD_EXIT]MyTriggerHandler.onBeforeUpdate could be a good start. But the first thing is to make sure the log is not truncated.
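That filtering step can also be scripted. Here is a rough sketch in Python, assuming the usual debug-log shape where each event line carries elapsed nanoseconds in parentheses, e.g. 16:06:58.18 (18043585)|METHOD_ENTRY|[1]|01p...|MyClass.myMethod(); the file name and the regex are assumptions to adjust to your own log:

import re
from collections import defaultdict

# Matches lines like: "16:06:58.18 (18043585)|METHOD_ENTRY|[1]|01p...|MyClass.myMethod()"
EVENT = re.compile(r'\((\d+)\)\|METHOD_(ENTRY|EXIT)\|.*\|([^|]+)$')

stack = []                 # (method, nanoseconds at entry)
totals = defaultdict(int)  # method -> cumulative inclusive nanoseconds

with open('apex-debug.log') as log:  # assumed file name
    for raw in log:
        m = EVENT.search(raw.strip())
        if not m:
            continue
        nanos, kind, method = int(m.group(1)), m.group(2), m.group(3)
        if kind == 'ENTRY':
            stack.append((method, nanos))
        elif stack and stack[-1][0] == method:
            _, entered = stack.pop()
            totals[method] += nanos - entered

# Print the 20 most expensive methods by inclusive time.
for method, spent in sorted(totals.items(), key=lambda kv: -kv[1])[:20]:
    print(f'{spent / 1e6:10.2f} ms  {method}')

If one particular record is the culprit, its method timings will stand out here long before the limit is exhausted.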

How to find out what implicit(s) are used in my Scala code

Problem statement:
I read in multiple sources/articles that implicits drive up Scala compilation time.
I want to remove/reduce them to the minimum possible to see what compilation time looks like without them (the codebase is around 1000 files of various complexity, based on scalaz & akka & slick).
I don't really know what kind of static analysis I can perform. Any links/references to already existing tooling would be highly appreciated.
It is true that implicits can degrade compilation speed, especially for code that uses them for type-level computations. It is definitely worth it to measure their impact. Unfortunately it can be difficult to track down the culprits. There are tools that can help though:
Run scalac with -Ystatistics:typer to see how many tree nodes are processed during type-checking. E.g. you can check the number of ApplyToImplicitArgs and ApplyImplicitView relative to the total (and maybe compare this to another code base).
There is currently an effort by the Scala Center to improve the status quo hosted at scalacenter/scalac-profiling. It includes an sbt plugin that should be able to give you an idea about implicit search times, but it's still in its infancy (not published yet at the time of writing). I haven't tested it myself but you can still give it a try.
You can also compile with -Xlog-implicits, pipe the output to a file and analyse the logs. It will show a message for each implicit candidate that was considered but failed, complete with source position, search type and reason for the failure. Such failed searches are expensive. You can write a simple script with your favourite scripting language (why not Scala?) to summarize the data and even plot it with some nice graphics.
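As an example of such a script (sketched here in Python for brevity), the following tallies the failures. It assumes failure lines look roughly like src/Foo.scala:42: fooOrd is not a valid implicit value for Ordering[Foo] because: - the exact wording varies between compiler versions, so treat the regex as an assumption:

import re
import sys
from collections import Counter

# A failed candidate is reported roughly as:
#   "src/Foo.scala:42: fooOrd is not a valid implicit value for Ordering[Foo] because:"
FAILED = re.compile(r'^(.+?\.scala:\d+):.*is not a valid implicit value for (.+) because:')

by_position, by_type = Counter(), Counter()

with open(sys.argv[1]) as log:  # pass the captured compiler output as the first argument
    for line in log:
        m = FAILED.match(line)
        if m:
            by_position[m.group(1)] += 1
            by_type[m.group(2)] += 1

print('Failed implicit searches by source position:')
for pos, n in by_position.most_common(15):
    print(f'  {n:6}  {pos}')
print('Failed implicit searches by target type:')
for tpe, n in by_type.most_common(15):
    print(f'  {n:6}  {tpe}')

The positions with the highest counts are the first places to look for expensive or ambiguous implicit scopes.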
Aside: How to resolve a specific implicit instance?
Just use reify and good ol' println debugging:
import scala.collection.SortedSet
import scala.reflect.runtime.universe._
println(showCode(reify { SortedSet(1,2,3) }.tree))
// => SortedSet.apply(1, 2, 3)(Ordering.Int)
There is scalac profiling work being done by the Scala Center now: https://scalacenter.github.io/scalac-profiling/

Are Sitecore's sublayout rendering stats incorrect?

The built-in Sitecore rendering stats page, http://<sitename>/sitecore/admin/stats.aspx, is really helpful for identifying inefficient and slow-loading XSLT renders. Recently I've started switching to .ascx sub layouts to take advantage of the Sitecore C# API, which can help improve performance when used correctly.
However, I've noticed that sub layouts (as opposed to XSLT renders) are not reported correctly on the stats page.
I know for a fact that this sub layout takes about 1.8 seconds to generate (I calculated this in the code-behind). Caching is turned off. I've refreshed the page 20 times to ensure I get an average. You will see that "Avg. items" is always 0 - I can live with this - but "Avg. time (ms)" is less than 1 ms, which is clearly wrong.
Does anyone have any insights into this? Has anyone found a way to get it to work correctly?
Judging whether a statistic is right/wrong is going to rely on understanding exactly what it is measuring.
Digging around in Sitecore.Diagnostics.Statistics using Reflector I note the following:
Sitecore.Web.UI.WebControl contains a field m_timer
This is 'started' in the BeforeRender() method and 'stopped' in the AfterRender() method
Data from that timer is sent to Statistics.AddRenderingData() and is logged against the control
This means it is measuring the time taken to render the control. For an XSLT that includes the processing time for preparing all the data in it, but since much of the work of a normal ASCX is done prior to the Render stage, the statistic is much less useful. Incorporating the Load stage in the time would inadvertently include the processing time for all child components, since the Load sequence is chained and called recursively, so that probably doesn't help much either.
I suspect there is no good way of measuring the processing time for a specific ASCX control (excluding its children) without first acquiring cumulative data and then post-processing the call chain to split the time apart. This is the sort of thing Red Gate ANTS does really well, but it might not be such a good idea to run it on a live production system, given the overhead.

Web site performance - what is it about? Readability and ignorance vs. performance

Let me cut to the chase...
On one hand, much of the programming advice given (here and elsewhere) emphasizes the notion that code should always be as readable and as clear as possible, at (almost?!) any performance cost.
On the other hand there are SO many slow web sites (at least one of which I know from personal experience).
Obviously round trips and DB access are issues a web developer should always keep in mind. But the trade-off between readability and what to avoid because it slows things down is, for me, very unclear.
My questions are: 1. What else? 2. Is there a rule (preferably simple, but probably quite general) one should adhere to in order to make sure one's code does not slow things down too much?
General best practices as well as specific advice would be much appreciated, especially advice based on experience.
Thanks.
Edit: A little clarification: general performance advice isn't hard to find. That's not what I'm looking for. I'm asking about two things: 1. While trying to make my code as readable as possible, when should I stop and say: "Now I'm hurting performance too much"? 2. Little, lesser-known things, like whether selecting just one column is faster than selecting all (thanks Otávio)... Thanks again!
See the Stack Overflow discussion here:
What is the most important effect on performance in a database-backed web application?
The top voted answer was, "write it clean, and use a profiler to identify real problems and address them."
In my experience, the biggest mistake (using C#/ASP.NET/LINQ) is over-querying due to LINQ's ease of use. One huge query is usually much faster than 10,000 small ones.
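That over-querying trap is not specific to LINQ. A minimal sketch of the same anti-pattern and its fix, using Python's sqlite3 with invented database and table names:

import sqlite3

conn = sqlite3.connect('shop.db')  # hypothetical database

# Anti-pattern: one query per row; 10,000 orders means 10,000 round trips.
orders = conn.execute('SELECT id, customer_id FROM orders').fetchall()
names = {}
for order_id, customer_id in orders:
    row = conn.execute('SELECT name FROM customers WHERE id = ?',
                       (customer_id,)).fetchone()
    names[order_id] = row[0]

# Better: one joined query fetches the same data in a single round trip.
rows = conn.execute(
    'SELECT o.id, c.name'
    ' FROM orders o JOIN customers c ON c.id = o.customer_id'
).fetchall()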
The other ASP.NET gotcha I see a lot is when the viewstate gets extremely fat and bloated. EnableViewState=false is your best friend; start every new project with it!
For web applications that have a database back end, it is extremely important that:
indexing is done properly
retrieval is done only for what is needed (avoid SELECT * when selecting specific fields will do, even more so if they are part of a covering index; see the sketch below)
Also, whenever possible, an appropriate caching strategy can help performance.
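A minimal illustration of the first two points, again with Python's sqlite3 and invented names:

import sqlite3

conn = sqlite3.connect('shop.db')  # hypothetical database

# Index the column you filter on, so lookups don't scan the whole table.
conn.execute('CREATE INDEX IF NOT EXISTS idx_orders_customer'
             ' ON orders(customer_id)')

# Select only the fields you need instead of SELECT *; if every selected
# field is part of a covering index, the query can be answered from the
# index alone without touching the table.
rows = conn.execute(
    'SELECT id, total FROM orders WHERE customer_id = ?', (42,)
).fetchall()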
Optimizing your code.
While making your code as readable as possible is very important, optimizing it is equally important. I've listed some items that will hopefully get you in the right direction.
For example in regards to Databases:
When you define the schema of your database, make sure that it is normalized and that indexes are defined properly.
When running a query, specifically SELECT, only select the fields you need.
You should only make one connection to the database per page load.
Refactor. This is probably the most important factor in producing clean, optimized code. Always go back and look at your code and see what can be done to improve it.
PHP Code:
Always test your work with a tool like PHPUnit.
echo is faster than print.
Wrapping your string in single quotes (‘) instead of double quotes (“) is faster, because PHP searches for variables inside “…” but not inside ‘…’. Use single quotes whenever you don't need variables evaluated in your string.
Use echo's multiple parameters (or stacked parameters) instead of string concatenation.
Unset or null your variables to free memory, especially large arrays.
Use strict code and avoid suppressing errors, notices and warnings; this results in cleaner code and less overhead. Consider having error_reporting(E_ALL) always on.
Incrementing an undefined local variable is 9-10 times slower than a pre-initialized one.
Methods in derived classes run faster than ones defined in the base class.
Error suppression with @ is very slow.
Website Optimization
A good place to start is here (http://developer.yahoo.com/performance/rules.html)
Performance is a huge topic and there are a lot of things that you can do to help improve the performance of your website. It's something that takes time and experience.
Best,
Richard Castera
Scott and Rcastera did a good job covering DB and querying optimization. To address your question from an HTML/CSS/JavaScript standpoint:
CSS:
Readability is key. CSS is rendered so fast that you should never feel it is necessary to sacrifice readability for performance. As such, focus on adding as many comments as necessary to document the code, why certain rules (like hacks) are there, and whatever else floats your comment boat. In CSS there are a few obvious rules to follow: 1) Use external stylesheets. 2) Limit the number of external stylesheets to limit GET requests.
HTML: Like CSS, HTML is read so fast by the browser that you should really only focus on writing clean code. Use whitespace, indentation, and comments to properly document your work. The only major things to remember in HTML are: 1) declare the <meta charset /> early within the head section. 2) Follow this guy's advice to minimize browser reflows. *This rule actually applies to CSS as well.
JavaScript: Most optimizations for JavaScript are really well known by now, so these will seem obvious: initializing variables outside of loops, pushing JavaScript to the bottom of the body so the DOM loads before scripts start tying up all of the resources, avoiding costly statements like eval() or with(). Not to sound like a broken record, but keeping a well commented and easily readable script should still be a priority when developing JavaScript, especially since you can minify and compress away all the excess when you deploy it.

How can I find the performance bottlenecks in my Ruby application?

I have written a Ruby application which parses lots of data from sources in different formats: HTML, XML and CSV files. How can I find out which areas of the code are taking the longest?
Are there any good resources on how to improve the performance of Ruby applications? Or do you have any performance coding standards you always follow?
For example, do you always build your strings with
output = String.new
output << part_one
output << part_two
output << "\n" # double quotes needed here; '\n' in Ruby is a literal backslash followed by n
or would you use
output = "#{part_one}#{part_two}\n"
Well, there are certain well-known practices (like string concatenation being way slower than "#{value}" interpolation), but in order to find out where your script is consuming most of its time, or more time than required, you need to do profiling. There is a Ruby gem called ruby-prof. The profiler can bring to your notice even those performance issues that may rarely occur. I have been using it a lot and find it very helpful. Here is some information about it, directly from its official site:
ruby-prof is a fast code profiler for Ruby. Its features include:
Speed - it is a C extension and therefore many times faster than the standard Ruby profiler.
Modes - ruby-prof can measure a number of different parameters, including call times, memory usage and object allocations.
Reports - can generate text and cross-referenced HTML reports.
Flat profiles - similar to the reports generated by the standard Ruby profiler.
Graph profiles - similar to GProf, these show how long a method runs, which methods call it and which methods it calls.
Call tree profiles - outputs results in the calltree format suitable for the KCachegrind profiling tool.
Threads - supports profiling multiple threads simultaneously.
Recursive calls - supports profiling recursive method calls.
You can test the performance of individual sections of code with the standard Benchmark module.
You could also test your code on different implementations of Ruby (e.g. 1.9, Rubinius) and see if that speeds things up.
Of course if your problems are algorithmic in nature then there's not too much point in worrying about things like string concatenation speeds...
The answer to the string concatenation is here: https://web.archive.org/web/20090122123342/http://blog.cbciweb.com/2008/06/10/ruby-performance-use-double-quotes-vs-single-quotes
Besides what's written above, I also recommend watching the Scaling Ruby screencast. It has some interesting tips & tricks on how to write faster Ruby code.
There is also a set of DTrace tools for Ruby you can use in the DTraceToolkit.
