How can I combine Fortify FPRs? - static-analysis

I am trying to combine the FPRs from multiple different application scans. I have tried:
FPRUtility -merge -project foo.fpr -source bar.fpr -f foobar.fpr
But that doesn't seem to do the trick. When I generate a report of foobar.fpr, I only see the results for one of the scans.
Any ideas?

According to Micro Focus's Fortify Audit Workbench User Guide and the Static Code Analyzer User Guide, you can only merge FPRs that contain the same analysis information, meaning the scans must have been performed on the same source code with the same Fortify settings and the same security content.
UPDATE
While the above is true of FPR files themselves, it is possible to merge scan results. You cannot merge FPRs produced from different source code. However, you CAN merge the intermediate scan results and generate a single FPR from them.
In /path/to/.fortify/path/to/build/ you will find a directory named after your <build_id>; that directory contains the intermediate files from which Fortify generates its FPR.
You can run
sourceanalyzer -b <build_id_1> -b <build_id_2> -b <build_id_3> -scan -f combined.fpr
This will generate an FPR that contains scan results from the different builds/applications.
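A minimal sketch of the whole flow, assuming two applications that each get translated under their own build ID first (the build IDs and source paths here are placeholder examples):

```
# Translate each application under its own build ID (IDs and paths are examples)
sourceanalyzer -b app1 path/to/app1/src
sourceanalyzer -b app2 path/to/app2/src

# Scan both build IDs together into a single combined FPR
sourceanalyzer -b app1 -b app2 -scan -f combined.fpr
```

The key point is that the merge happens at scan time, across build IDs, rather than by merging the finished FPR files.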

Related

How can I ignore test file in codeql?

I want to ignore test files in the CodeQL results, but this query still includes them:
import codeql.ruby.AST
from RegExpLiteral t, File f
where not f.getBaseName().regexpMatch("spec")
select t
How can I ignore test files in the result?
regexpMatch requires that the given pattern match the entire receiver. In your case that means it would only succeed if the file name were exactly "spec". You probably want to test for ".*spec.*" instead (or use matches("%spec%")).
I am not sure, though, whether that answers your question. As far as I know there is in general no direct way to ignore test sources. You could, however, do one of the following:
- Exclude the test directory when building the CodeQL database; for GitHub code scanning, see the documentation
- For GitHub code scanning, filter non-application-code alerts out of the repository's alerts list (see documentation)
- Manually add conditions to your query that exclude tests, for example a file-name check as you have done, or checking the code for certain test-related constructs
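Putting the pattern suggestion together, a sketch of a query along those lines might look like the following. Note that the original query never relates f to t, so this version derives the file from the literal's own location instead; the predicate names are assumed from the standard CodeQL Ruby libraries, so check them against the library docs:

```ql
import codeql.ruby.AST

from RegExpLiteral t
// Exclude literals whose enclosing file path contains "spec"
where not t.getLocation().getFile().getRelativePath().matches("%spec%")
select t
```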

SVN: How to list author, date and comments from svn log

I am using SVN on a Windows 10 machine. I want to list the Author, Date and Comment of all commits within a date range, reporting one line per commit with those three columns. How can I do that?
I want to be able to copy that report and paste it into Excel.
Thanks
Short answer
You can't. The format of log output cannot be changed in pure SVN; you can only suppress the log message lines with the -q option.
Longer answer
Because svn log always has a single (documented) output format, and the -r option accepts dates as parameters, you can write an appropriate log command and post-process its results (either the standard human-readable form or the --xml output).
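The post-processing step can be sketched in Python, assuming XML captured from something like `svn log --xml -r {2023-01-01}:{2023-12-31}` (the dates are placeholder examples); the tab-separated output pastes cleanly into Excel as three columns:

```python
import xml.etree.ElementTree as ET

def log_to_rows(xml_text):
    """Convert `svn log --xml` output into (author, date, message) rows."""
    root = ET.fromstring(xml_text)
    rows = []
    for entry in root.findall("logentry"):
        author = entry.findtext("author", default="")
        date = entry.findtext("date", default="")
        # Collapse multi-line commit messages onto one line
        msg = entry.findtext("msg", default="").replace("\n", " ").strip()
        rows.append((author, date, msg))
    return rows

# Truncated sample of svn's XML log format, for illustration
sample = """<log>
  <logentry revision="42">
    <author>alice</author>
    <date>2023-05-01T10:00:00Z</date>
    <msg>Fix bug</msg>
  </logentry>
</log>"""

for author, date, msg in log_to_rows(sample):
    print("\t".join((author, date, msg)))
```

Redirecting the script's output to a .txt file and opening it in Excel (or pasting it) splits the tab-separated fields into columns automatically.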
Long answer
If generating different custom reports from SVN repositories is a long-running, regular task for you, you might (at least) consider using Mercurial (with hgsubversion) as an interface for processing the data. With hg you'll have:
- transparent access to the original SVN repos
- the full power of templating and revsets for extracting and manipulating the data to meet your needs and requirements
What you are looking for is called a Subversion web view. These are third-party, mostly free-to-use web views of your repository in which you can filter commits.
You can either filter there in the view, or copy the list into Excel and add a filter yourself.
Hope this helps.

Sphinx docs including unit test output

This is a how-to/best-practice question.
I have a code base with a suite of unit tests run with pytest
I have a set of *.rst files which provide explanation of each test, along with a table of results and images of some mathematical plots
Each time the pytest suite runs, it dynamically updates the *.rst files with the results of the latest test data, updating numerical values, time-stamping the tests, etc
I would like to integrate this with the project docs. I could:
- Build these rst files separately with sphinx-build whenever I want to view the test results [this seems bad, since it's labor-intensive and not automated]
- Tell Sphinx to render these pages separately and include them in the project docs [better, but I'm not sure how to configure this]
- Have a separate set of Sphinx docs for the test results which I can build after each run of the test suite
Which approach (or another approach) is most effective? Is there a best practice for doing this type of thing?
Maybe take a look at Sphinx-Test-Reports, which reads all of its information from JUnit-based XML files (pytest supports this) and generates the output during the normal Sphinx build phase.
So you are free to add custom information around the test results.
Example from webpage:
.. test-report:: My Report
   :id: REPORT
   :file: ../tests/data/pytest_sphinx_data_short.xml
So, to answer your question completely: take none of the given approaches and let a Sphinx extension do it at build time.
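If you go that route, the extension also has to be enabled in the project's conf.py. A minimal sketch, where the module name is an assumption based on the sphinx-test-reports package and should be verified against its install docs:

```python
# conf.py (fragment) -- extension module name assumed; verify against
# the sphinx-test-reports documentation
extensions = [
    "sphinxcontrib.test_reports",
]
```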

Expressions in a build rule "Output Files"?

Can you include expressions in the "Output Files" section of a build rule in Xcode? Eg:
$(DERIVED_FILE_DIR)$(echo "/dynamic/dir")/$(INPUT_FILE_BASE).m
Specifically, when translating Java files with j2objc, the resulting files are saved in subfolders, based on the java packages (eg. $(DERIVED_FILE_DIR)/com/google/Class.[hm]). This is without using --no-package-directories, which I can't use because of duplicate file names in different packages.
The issue is in Output Files, because Xcode doesn't know to look for the output file at the correct location. The default location is $(DERIVED_FILE_DIR)/$(INPUT_FILE_BASE).m, but I need to perform a string substitution to insert the correct path. However, any expression added as $(expression) gets ignored, as if it were never there.
I also tried to export a variable from the custom script and use it in Output Files, but that doesn't work either, because the Output Files are transformed into SCRIPT_OUTPUT_FILE_X before the custom script is run.
Unfortunately, Xcode's build support is pretty primitive (compared to, say, make, which is thirty-odd years older :-). One option to try is splitting the Java source so that the two classes with the same name are in different sub-projects. If you then use a different prefix for each sub-project, the names will be disambiguated.
A more fragile, but maybe simpler approach is to define a separate rule for the one of the two classes, so that it can have a unique prefix assigned. Then add an early build phase to translate it before any other Java classes, so the rules don't overlap.
For me, the second alternative does work (Xcode 7.3.x) - to a point.
My rule is not for Java but for Google Protobuf, and I tried to maintain the same hierarchy in the generated code (like your Java package hierarchy) as in the source .proto files. Indeed, the files (.pb.cc and .pb.h) were created as expected, with their hierarchies, inside the Build/Intermediates/myProject.build/Debug/DerivedSources directory.
However, Xcode usually knows to go on and compile the generated output into the current target, but here that breaks, as it only looks for files in the actual ${DERIVED_FILE} directory, not within sub-directories underneath it.
Could you please explain "Output Files are transformed into SCRIPT_OUTPUT_FILE_X" in more detail? I do not understand.

mvn sourceanalyzer plug-in not running scan

I am using maven with a Fortify360 plug-in to analyze the source code. The sca:translate step runs fine and seems to generate the correct sca-translate-java.txt files, but the sca:scan step does not actually run the scan on any of the sub-projects.
I am given no reason why, just an error message like:
* Skipping scan of sub-project
I am new to Fortify. Anyone have experience with this, and have some ideas for why it could be skipping the scans?
Thanks!
If your projects inherit from a top-level pom, you also need to pass the -Dsca.toplevel=foo parameter, and you need to set the build ID manually.
So in the translate step, add an extra -D parameter to set a build ID.
In the scan step, add the same -D parameter to set the build ID.
Also in the scan step, add the "top level" -D parameter.
As it is nicely self-documented in the sources (see ScanMojo), if you want an aggregated result for the entire project you need to specify both <buildId>...</buildId> and <toplevelArtifactId>...</toplevelArtifactId>, and they should match; otherwise it skips the sub-projects.
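The three steps described above might be sketched as follows. Only -Dsca.toplevel comes from the answer; -Dsca.buildId is a placeholder for the build-id property, whose exact name depends on your plugin version, so check its documentation:

```
# Translate with an explicit build ID (property name is an assumption)
mvn sca:translate -Dsca.buildId=myapp

# Scan with the same build ID plus the top-level artifact ID
mvn sca:scan -Dsca.buildId=myapp -Dsca.toplevel=myapp
```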
