Burp Sequencer, how to set significance level - entropy

I read the article "How the randomness tests work", and it appears the significance level for the analysis can be configured, but I can't find any UI element or documentation describing how to change it.

I found no way of configuring a significance level, but the report generated by the Sequencer provides results for multiple significance levels.

Related

Understand SonarQube and its testing coverage

I am a beginner with SonarQube and have really tried to google and read a lot of community pages to understand which functions SonarQube offers.
What I don't get is: what does the test coverage in SonarQube refer to?
If it says, for example, that the coverage on new code is 30%, what does "new code" mean?
And when does SonarQube say that an issue is a bug? Is the analyzed code compared to a certain standard in order for SonarQube to say that there is a bug?
I hope someone with more knowledge about SonarQube can help me understand it. Thank you very much.
Test coverage (also known as code coverage) corresponds to the proportion of the application code (i.e., code excluding test and sample code) that is executed by test cases, out of all application code in the code base.
SonarQube does not compute code coverage itself. Instead, coverage is computed and uploaded by external code coverage tools (e.g., Cobertura, JaCoCo). SonarQube presents code coverage at different levels (e.g., line coverage, condition coverage); see https://docs.sonarqube.org/latest/user-guide/metric-definitions/#header-9.
Coverage on new code refers to the proportion of code that is both covered and added (or modified) since a certain baseline out of all added and changed code since the same baseline. The baseline can, for example, be the previously analyzed code state or the code state of the previous commit. That is, this metric expresses how extensively changes have been tested. Note that 100% coverage does not mean that code has been perfectly tested; it just says that all code has been executed by test cases.
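To make the ratio concrete, here is a minimal Java sketch with hypothetical numbers (this is an illustration of the definition above, not SonarQube's actual implementation):

```java
public class NewCodeCoverage {

    /**
     * Coverage on new code = covered new/changed lines / all new/changed lines.
     * Lines that existed before the baseline are ignored entirely.
     */
    static double coverageOnNewCode(int changedLines, int coveredChangedLines) {
        if (changedLines == 0) {
            return 100.0; // nothing new to cover in this sketch
        }
        return 100.0 * coveredChangedLines / changedLines;
    }

    public static void main(String[] args) {
        // Hypothetical example: 200 lines were added or modified since the
        // baseline, and the coverage tool reports 60 of them as executed.
        System.out.printf("Coverage on new code: %.1f%%%n",
                coverageOnNewCode(200, 60)); // prints 30.0%
    }
}
```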
Issues in SonarQube do not necessarily represent bugs. Usually most issues are actually not bugs but are problems affecting code maintainability in the long term (e.g., code duplications) or violations of best practices. Still, some issues can represent bugs (e.g., potential null-dereferences, incorrect concurrency handling).
Note that an issue can also be a false positive and therefore not be a problem at all.
Most issues are identified with static code analysis by searching the code structure for certain patterns. Some can be uncovered by simple code searches (e.g., violations of naming conventions). Other analyses / issue classes may additionally need data-flow analysis (e.g., null-dereferences) or require byte-code information.
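As a rough illustration of the kind of pattern a data-flow check looks for (not SonarQube's actual rule implementation), consider this Java snippet with a potential null-dereference:

```java
import java.util.Map;

public class UserGreeter {

    private final Map<String, String> displayNames;

    public UserGreeter(Map<String, String> displayNames) {
        this.displayNames = displayNames;
    }

    public String greet(String userId) {
        // Map.get may return null when the key is absent; a data-flow
        // analysis can follow this value and flag the call below as a
        // potential NullPointerException.
        String name = displayNames.get(userId);
        return "Hello, " + name.toUpperCase(); // possible null dereference
    }
}
```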

Jmeter Report Making Approach

Usually I run the JMeter tests multiple times and select a consistent result out of all the runs, then use those statistics to build the JMeter report.
But someone from my team asked that we calculate the average of all runs and use that for the report.
If I do so, I cannot generate the built-in graphs which JMeter provides, and the statistics I present for the test are no longer the original ones; they are altered by the averaging.
Which is the better approach to follow?
I think you can use the Merge Results tool to join two or more result files into a single one and apply your analysis solution(s) to this aggregated result. Moreover, you will be able to compare the results of different test runs.
You can install the tool using the JMeter Plugins Manager.
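If you prefer to script it yourself, the following is a rough Java sketch of what such a merge does, assuming CSV-format .jtl result files that share the same header row (the file names are made up; the actual Merge Results plugin offers more control than this):

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class MergeJtlFiles {

    public static void main(String[] args) throws IOException {
        // Hypothetical file names; pass your own result files instead.
        List<Path> runs = List.of(Path.of("run1.jtl"), Path.of("run2.jtl"));
        Path merged = Path.of("merged.jtl");

        try (PrintWriter out = new PrintWriter(Files.newBufferedWriter(merged))) {
            boolean headerWritten = false;
            for (Path run : runs) {
                List<String> lines = Files.readAllLines(run);
                if (lines.isEmpty()) {
                    continue;
                }
                if (!headerWritten) {
                    out.println(lines.get(0)); // keep the header row once
                    headerWritten = true;
                }
                // Append all sample rows from this run.
                lines.stream().skip(1).forEach(out::println);
            }
        }
        System.out.println("Merged results written to " + merged);
    }
}
```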
I am currently developing a tool based on Python + Django + Postgres which helps to run/parse/analyze/monitor JMeter load tests and compare results. It is at an early stage but already not so bad (though sadly poorly documented):
https://github.com/v0devil/JMeter-Control-Center
There is also a static report generator based on Python + Pandas. Maybe you can modify it for your tasks :) https://github.com/v0devil/jmeter_jenkins_report_generator

Sonarqube and Cucumber features

Is there any way to include the test coverage of Cucumber features and other useful statistics in the SonarQube analysis? I have done a bit of research, but couldn't find a proper plugin.
From this thread (written after the OP's question), David Racadon added:
As far as I understand:
It is not possible to run an analysis on a project containing only test code because the 'sonar.sources' property is mandatory.
Measures on test code are not aggregated at project level.
As far as I am concerned, I consider test files part of the project the same way source files are. Thus, measures of test files should be aggregated on top of source files.
For now, SonarQube shows that your project is 1,000 lines even if you have 0 or 10,000 lines of test code on top of those 1,000 lines of source code. For me, SonarQube gives you a biased estimate of the size of your project and the effort of maintenance.
The closest would then be his plugin racodond/sonar-gherkin-plugin which:
analyzes Cucumber Gherkin feature files and:
Computes metrics: lines of code, number of scenarios, etc.
Checks various guidelines to find out potential bugs and code smells through more than 40 checks
Provides the ability to write your own checks
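To give an idea of the kind of metric the plugin reports (this is not the plugin's own code, just a rough approximation), a scenario count and line count for a .feature file can be derived with a simple scan:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class FeatureFileMetrics {

    public static void main(String[] args) throws IOException {
        // Hypothetical path; point this at one of your Cucumber feature files.
        Path feature = Path.of("src/test/resources/login.feature");
        List<String> lines = Files.readAllLines(feature);

        long scenarios = lines.stream()
                .map(String::trim)
                .filter(l -> l.startsWith("Scenario:") || l.startsWith("Scenario Outline:"))
                .count();
        long linesOfCode = lines.stream()
                .map(String::trim)
                .filter(l -> !l.isEmpty() && !l.startsWith("#")) // skip blanks and comments
                .count();

        System.out.println("Scenarios: " + scenarios);
        System.out.println("Non-comment lines: " + linesOfCode);
    }
}
```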

What are your top 3 XPages performance tips for new XPages developers?

What 3 things would you tell developers new to XPages to do to help maximize the performance of their XPages apps?
Tim Tripcony has given a bunch of suggestions here:
http://www-10.lotus.com/ldd/xpagesforum.nsf/topicThread.xsp?action=openDocument&documentId=365493C31B0352E3852578FD001435D2#AEFBCF8B111E149B852578FD001E617B
Not sure if this tip is for beginners, but use any of the LifeCyclePhaseListeners from the OpenNTF Snippets to see what is going on in your datasources during a complete or partial refresh (http://openntf.org/XSnippets.nsf/snippet.xsp?id=a-simple-lifecyclelistener-)
Use the Extension Library. Report bugs (or what you consider a bug) at OpenNTF.
Use the SampleDb from the ExtLib. You can easily modify the samples to your own needs. It is even good for testing whether an issue you encounter is reproducible in this DB.
Use Firebug (or a similar tool that comes with the browser of your choice). If you see an error in the error tab, go and fix it.
Since you're asking for only 3, here are the tips I feel make the biggest difference:
Determine what your users / customers mean by "performance", and set the page persistence option accordingly. If they mean scalability (max concurrent users), keep pages on disk. If they mean speed, keep pages in memory. If they want an ideal mixture of speed and scalability, keep the current page in memory. This latter option really should be the server default (set in the server's xsp.properties file), overridden only as needed per application; a sample xsp.properties sketch follows this list.
Set value bindings to compute on page load (denoted by a $ in the source XML) wherever possible instead of compute dynamically (denoted by a #). $ bindings are only evaluated once, while # bindings are recalculated over and over again, so changing computations that only need to be loaded once per page to $ bindings speeds up both initial page load and any events fired against the page once loaded.
Minimize the use of SSJS. Wherever possible, use standard EL instead (e.g. ${database.title} instead of ${javascript:return database.getTitle();}). Every SSJS expression must be parsed into an abstract syntax tree to be evaluated, which is incrementally slower than the standard EL resolver.
There are many other ways to maximize performance, of course, but in my opinion these are the easiest ways to gain noticeable improvement.
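For reference, the persistence setting mentioned in the first tip lives in the server's xsp.properties file. The lines below are a sketch from memory; treat the exact property name and values as an assumption and verify them against your server documentation before relying on them:

```properties
# Keep only the current page in memory, older pages on disk
# (the speed/scalability compromise described above).
xsp.persistence.mode=fileex

# Alternatives (choose one):
#   xsp.persistence.mode=basic   -> all pages in memory (fastest)
#   xsp.persistence.mode=file    -> all pages on disk (most scalable)
```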
1. Use a script library instead of writing the bulk of your code in the XPage.
2. Use a theme or a separate CSS class for each element.
3. Try to limit your SSJS code, because server-side requests reduce system performance.
4. (Consider this a sub-point of 3.) Use the direct functions available in SSJS; don't use while and for loops for things like document collections, counts and other such operations.
The basics, like:
Use the immediate flag (or one of the other flags) on server-side events if possible.
Check the flag (I forgot its name) which generates the CSS and JS as one big file at runtime, thereby minimizing the amount of requests.
Choose your scopes wisely. Don't put everything in your sessionScope; define when, where and how you are using the data and, based on that, use the correct scope. This can lead to better memory usage.
And of course the most important one: read the Mastering XPages book.
Other tips I would add:
When retrieving data, use ViewEntryCollections or the ViewNavigator.
Upgrade to 8.5.3.
Use default HTML tags if possible. If you don't need the functionality of an xp:div or xp:panel, use a <div> instead so you don't generate an extra UIComponent on the tree.
Define which page persistence mode you need.
It depends a lot on what you mean by performance. For performance of the app:
Use compute on page load wherever feasible. It significantly improves performance.
In larger XPages particularly, combine code into single controls where possible. E.g. Use a single Computed Field control combining literal strings, EL and SSJS rather than one control for each language. On that point, EL performs better than SSJS, and SSJS on the XPage performs better than SSJS in a Script Library.
Use dataContexts for properties that are calculated more than once on an XPage.
Partial Execution mode is a very strong recommendation, but probably beyond new XPages developers at this point. Java will also perform better than SSJS in a Script Library, but again beyond new developers. XPages controls you've created with the Extensibility Framework should perform better, because they should run fewer lines of Java than multiple controls, but I haven't tested that.
If you mean performance of the developer:
Get the Extension Library.
Use themes to set default properties, e.g. a standard style for all your pagers.
Use Firebug. If you're developing for the Notes Client or IE, still use Firebug. You'll spend longer suffering through Client/IE than you will fixing the few quirks that will remain.

Determining which classes would benefit most from unit testing?

I am working on a project where we have only 13% code coverage with our unit tests. I would like to come up with a plan to improve that, focusing first on the areas where increasing coverage would bring the greatest value.
This project is in C#; we're using VS 2008 and TFS 2008, and our unit tests are written using MSTest.
What methodology should I use to determine which classes we should tackle first?
Which metrics (code or usage) should I be looking at (and how can I get those metrics if this is not obvious)?
I would recommend adding unit tests to all the classes you touch, not retrofitting existing classes.
Most of the advantage of unit testing is in helping programmers code and in ensuring that "fixes" don't actually break anything; for a section of code that isn't ever modified, the benefits of unit tests start to drop off.
You might also want to add unit tests to classes that you rely on if you have nothing better to do.
You absolutely should add tests to new functionality you add, but you should probably also add tests to existing functionality you may break.
If you are doing a big refactor, consider getting 80-100% coverage on that section first.
For some good statistics and deterministic querying of certain methods you could definitely look at NDepend: http://www.ndepend.com/
NDepend exposes a query language called CQL (Code Query Language) that allows you to write queries against your code relating to certain statistics and static analysis.
There is no single true way to determine which classes might benefit the most; however, by setting your own thresholds in CQL you can establish some rules and conventions.
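For example, a query of roughly the following shape could surface long, complex methods as test candidates. The syntax and metric names here are written from memory of the older SELECT-style CQL, so treat them as an assumption and check the NDepend documentation for the exact form:

```
SELECT TOP 20 METHODS
WHERE NbLinesOfCode > 30 AND CyclomaticComplexity > 10
ORDER BY CyclomaticComplexity DESC
```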
The biggest value of a unit test is for maintenance, to ensure that the code still works after changes.
So, concentrate on methods/classes that are most likely / most frequently changed.
Next in importance are classes/methods with less-than-obvious logic. The unit tests will make them less fragile while serving as extra "documentation" for their contracted API.
In general, unit tests are a tool to protect against regression, and regression is most likely to occur in classes with the most dependencies. You should not have to choose (ideally you test all classes), but if you have to, start with the classes that have the most dependencies.
Arrange all your components into levels. Every class in a given level should only depend on components at a lower level. (A "component" is any logical group of one or more classes.)
Write unit tests for all your level 1 components first. You usually don't need mocking frameworks or any other such nonsense, because these components only rely on the .NET Framework.
Once level 1 is done, start on level 2. If your level 1 tests are good, you won't need to mock those classes when you write your level 2 tests.
Continue in like fashion, working your way up the application stack.
Tip: Break all your components into level-specific DLLs. That way you can ensure that low-level components don't accidentally take a dependency on a higher-level component.
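A minimal sketch of the idea, shown here in Java with JUnit for brevity (the same structure applies to C# and MSTest); the class names are made up for illustration:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Level 1: depends only on the standard library, so no mocks are needed.
class PriceCalculator {
    double totalWithTax(double net, double taxRate) {
        return net * (1.0 + taxRate);
    }
}

// Level 2: depends only on the level 1 component, which is already tested,
// so it can be exercised directly without a mocking framework.
class OrderService {
    private final PriceCalculator calculator = new PriceCalculator();

    double orderTotal(double net) {
        return calculator.totalWithTax(net, 0.2);
    }
}

public class LevelledTests {

    @Test
    public void level1_priceCalculator() {
        assertEquals(120.0, new PriceCalculator().totalWithTax(100.0, 0.2), 1e-9);
    }

    @Test
    public void level2_orderService_reusesTestedLevel1() {
        assertEquals(120.0, new OrderService().orderTotal(100.0), 1e-9);
    }
}
```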
