Get List of functions (with metrics) in My Solution - visual-studio-2013

I have a large assembly (written in VB.NET).
Is there a simple way (or tool) to list all the functions, with perhaps the size of each function (with respect to lines of code)?
I have downloaded NDepend but could not see that facility within it.

With NDepend you just have to do: NDepend > Search > Search Method by Size.
Notice that you can export this result to HTML, XML, txt, Excel.
Notice that a C# LINQ query (CQLinq) is generated, and you can edit it to refine the results and append more code metrics.
Notice also that you can list methods according to more criteria than size.
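For example, here is a minimal sketch of what such an edited CQLinq query could look like (NbLinesOfCode and CyclomaticComplexity are standard CQLinq metrics; the threshold of 10 is an arbitrary placeholder):

// CQLinq: methods sorted by size, with an extra metric appended.
from m in JustMyCode.Methods
where m.NbLinesOfCode > 10   // placeholder threshold
orderby m.NbLinesOfCode descending
select new { m, m.NbLinesOfCode, m.CyclomaticComplexity }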

Is there a way of sorting indented, multi-line content alphabetically in Visual Studio Code (VSCode)?

I have a very long script that contains a huge list of brand names that is not in alphabetical order; the result of many different people working on it over a few years :) Question: is there a way to re-order lines of code alphabetically in Visual Studio Code (VSCode)? I know this is possible for single lines of code, but is it possible for multi-line blocks of code that have been indented? See the attached example: in this case I would be looking to select all these lines of code, sort by the brand name (acer, acqua, aeg, amazon etc.), and retain the nested parts of the code for each. Many thanks in advance if anyone has any ideas or suggestions.
[Screenshot of code to be sorted]
I tried sorting in VSCode, but it only sorts individual lines of code and mixes them together. It does not recognize multi-line, nested blocks of code as one group that can be sorted as a unit.
Assuming you have a JSON object, or could turn your text into one; it looks very close already. If it isn't JSON, you might consider copying it to a separate file and making it JSON, so you can use a sorter.
I see a number of json sorters in the Marketplace: just search for json sort.
This one has 280K+ downloads: Sort JSON Objects
And I see a couple more. There is no built-in way to do what you want, you will need an extension.
I would (a scripted version of the same idea is sketched below):
Replace newline + three tabs (\n\t\t\t) with an empty string, so each indented block collapses onto its parent line
Sort
Format document (you may need to Change Language Mode first)
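If you'd rather script the collapse-sort-reexpand idea than do it by hand, here is a minimal sketch in C# (it assumes continuation lines are indented with exactly three tabs, as in the replace step above; the file names are placeholders):

// Collapse each indented block onto its parent line, sort, then re-expand.
using System;
using System.IO;
using System.Linq;

class SortBlocks
{
    static void Main()
    {
        string text = File.ReadAllText("brands.txt");            // placeholder input
        string collapsed = text.Replace("\n\t\t\t", "\x01");     // join block lines with a sentinel
        var sorted = collapsed.Split('\n')
                              .OrderBy(l => l, StringComparer.OrdinalIgnoreCase);
        string expanded = string.Join("\n", sorted)
                                .Replace("\x01", "\n\t\t\t");    // restore the indentation
        File.WriteAllText("brands.sorted.txt", expanded);
    }
}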

exist-db: XQuery and documents with XInclude

I'm embarking on a new project with eXist. We'll be storing a few hundred TEI XML documents that represent manuscripts. A number of things we want to capture are repetitive, mainly people and places. My colleague has asked the TEI community about strategies for representing what we want to capture, and XInclude was suggested as a way of reducing duplication.
I've had a quick play with adding an XInclude into a document, and the serialized XML does render the included XML file. However, the included text was missing from the results of an XQuery. I notice in the eXist docs (http://exist-db.org/exist/apps/doc/xinclude.xml) that:
eXist-db expands XIncludes at serialization time, which means that the query engine will see the XInclude tags before they are expanded. You therefore cannot query across XIncludes - unless you create your own code (e.g. an XQuery function) for it. We would certainly like to support queries over xincluded content in the future though.
What is the best practice for querying files that use XInclude?
I'm wondering whether I should have a 'job' that serializes the source TEI XML files to expand the XIncludes and store these files in a separate collection? In that case, would file:serialize be the correct function for this task?
We are at the start of the project, so any advice appreciated.
Can you describe what kind of query you tried that was missing the text?
Generally, since the files referenced via XInclude are well-formed XML documents, you can use collections (folders) to organise your documents in eXist-db. So instead of for $search in doc("mydoc.xml") you could write for $search in collection('/app/mydata')/*
A more elaborate answer would follow the href attribute of the unexpanded XInclude statement in the source document and find the matching element in the target, but it's difficult to abstract that without a concrete MWE.
Have you tried creating a temporary, expanded fragment in a let clause, and querying that instead of the stored XML?
Beware of namespaces!
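As for the 'job' idea in the question: since expansion happens at serialization, one simple approach is to fetch each document through eXist's REST interface (which serializes, and therefore expands, the XIncludes) and store the expanded copy in a separate collection for querying. A minimal C# sketch of that idea; the host, credentials, and collection paths are placeholders:

// Fetch the serialized (XInclude-expanded) document and store the expanded
// copy in a separate collection. All URLs and credentials are placeholders.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class ExpandJob
{
    static async Task Main()
    {
        using var client = new HttpClient();
        var auth = Convert.ToBase64String(Encoding.ASCII.GetBytes("admin:password")); // placeholder
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", auth);

        // GET serializes the document, which is when eXist expands xi:include.
        string expanded = await client.GetStringAsync(
            "http://localhost:8080/exist/rest/db/tei/ms001.xml");          // placeholder source

        // PUT the expanded serialization into a separate collection.
        await client.PutAsync(
            "http://localhost:8080/exist/rest/db/tei-expanded/ms001.xml",  // placeholder target
            new StringContent(expanded, Encoding.UTF8, "application/xml"));
    }
}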
Hope this helps, and greetings to Sebastiaan.

XSLT Performance Issue

The XSLT transformation is done through .NET code using the API provided by Saxon. I am using the Saxon 9 Home Edition API. The XSLT version is 2.0 and it generates XML output. The input file size is 123 KB.
The XSLT adds attributes to the input XML file depending on certain scenarios. There are 7 modes in total used in this XSLT. The value of an attribute generated in one mode is used in another mode, hence the multiple modes.
The output is correctly generated, but it takes around 10 seconds to execute this XSLT. When the same XSLT is executed in 'Altova XMLSpy 2013', it takes around 3-4 seconds.
Is there a way to reduce this 10-second execution time? What could be the cause of such a long execution?
The XSLT is available for download at the link below.
XSLT Link
Without having a source document to run this against (and therefore to make measurements) it's very hard to be definitive about where the inefficiencies are, but the most obvious at first glance is the weird testing of element names in patterns like:
match="*[name()='J' or name()='H' or name()='F' or name()='D' or name()='B' or name()='I' or name()='G' or name()='E' or name()='C' or name()='A' or name()='X' or name()='Y' or name()='O' or name()='K' or name()='L' or name()='M' or name()='N']
which in Saxon would be vastly more efficient if written the natural way as
match="J|H|F|D|B|I|G|E|C|A|X|Y|O|K|L|M|N"
It's also more likely to be correct that way, since comparing name() against a string is sensitive to the chosen prefix, and XSLT code really ought to work whatever namespace prefix the source document author has chosen.
The reason the latter is much more efficient is that Saxon organizes the source tree for rapid matching of elements by name (meaning namespace URI plus local name, excluding prefix). When you match by name in this way, the matching template rule can be found by a quick hash table lookup. If you use predicates that have to be evaluated by expanding the name (with prefix) as a string and comparing the string, not only does each comparison take longer, it can't be optimized with a hash lookup.
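One more practical check: separate stylesheet compilation time from transformation time, since a single end-to-end measurement of 10 seconds may be dominated by compilation. A minimal sketch using the Saxon.Api classes from Saxon 9 HE on .NET (exact member names can vary by release; the file paths are placeholders):

// Compile once, reuse the XsltExecutable, and time the two phases separately.
using System;
using System.Diagnostics;
using Saxon.Api;

class RunTransform
{
    static void Main()
    {
        var proc = new Processor();
        var sw = Stopwatch.StartNew();
        XsltExecutable exec = proc.NewXsltCompiler()
            .Compile(new Uri("file:///C:/work/transform.xslt"));   // placeholder path
        Console.WriteLine("compile:   " + sw.ElapsedMilliseconds + " ms");

        XdmNode input = proc.NewDocumentBuilder()
            .Build(new Uri("file:///C:/work/input.xml"));          // placeholder path
        XsltTransformer t = exec.Load();   // Load() is cheap; reuse exec across runs
        t.InitialContextNode = input;

        var serializer = new Serializer();
        serializer.SetOutputFile("C:/work/output.xml");             // placeholder path

        sw.Restart();
        t.Run(serializer);
        Console.WriteLine("transform: " + sw.ElapsedMilliseconds + " ms");
    }
}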

Searching a list of keywords from text files in folders

I have compiled a list of db object names, one name per line, in a text file. I want to know, for each name, where it is being used. The search target is a group of folders containing sub-folders of source code.
Before I give up looking for a tool to do this and start creating my own, perhaps you can help point me to an existing one.
Ideally, it should be a Windows desktop application. I have not used grep before.
Use grep (there are tons of ports of this command to Windows; search the web).
Alternatively, use Agent Ransack.
See our Source Code Search Engine. It indexes a large code base according to the atoms (tokens) of the language(s) of interest, and then uses that index to quickly execute structured queries stated in terms of language elements. It is a kind of super-grep, but it isn't fooled by comments or string literals, and it automatically ignores whitespace. This means you get a lot fewer false positive hits than you get with grep.
If you had an identifier "foo", the following query would find all mentions:
I=foo
For C and Java, you can constrain the types of identifier accesses to Use, Read, Write or Defines. The query
D=bar*
would find only declarations of identifiers which start with the letters "bar".
You can write more complex queries using sequences of language tokens:
'int' I=*baz* '['
for C, would find declarations of any variable name that contained the letters "baz" and apparently declared an array.
You can see the hits in a GUI, and one-click navigate to a source code view of any hit.
It is a Windows application. It handles a wide variety of languages: C#, C++, Java, ... and many more.
I created an SSIS package to load my 500+ source code files, distributed across nested folders belonging to several projects, into a table, with one row per line from the files (10K+ lines in total).
I then ran a select statement against it, cross-applying the table that keeps the list of 5K+ keywords of db objects, with the help of RegEx for MS-SQL (http://www.simple-talk.com/sql/t-sql-programming/clr-assembly-regex-functions-for-sql-server-by-example/). The query took almost 1.5 hours to complete.
I know it's long-winded, but this is exactly what I need. Thank you for your efforts in guiding me. I would be happy to explain the details further, should anyone get interested in using my method.
-- Record, for every source line, each db object name it mentions.
insert dbo.DbObjectUsage
select
    do.Id as DbObjectId,
    fl.Id as FileLineId
from
    dbo.FileLine as fl              -- 10K+ rows, one per source line
    cross apply dbo.DbObject as do  -- 5K+ rows, one per db object name
where
    -- whole-word match of the object name within the line
    dbo.RegExIsMatch('\b' + do.name + '\b', fl.Line, 0) != 0
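For comparison, the same whole-word matching can be done directly over the files with .NET's Regex, with no database round-trip; a minimal sketch (the root folder and keyword file are placeholders):

// Scan all files under a root folder for whole-word occurrences of each
// db object name, printing file:line:keyword for every hit.
using System;
using System.IO;
using System.Text.RegularExpressions;

class KeywordScan
{
    static void Main()
    {
        string[] keywords = File.ReadAllLines("dbobjects.txt");    // one name per line (placeholder)
        foreach (string file in Directory.EnumerateFiles(@"C:\src", "*.*", SearchOption.AllDirectories))
        {
            string[] lines = File.ReadAllLines(file);
            for (int i = 0; i < lines.Length; i++)
                foreach (string kw in keywords)
                    if (Regex.IsMatch(lines[i], @"\b" + Regex.Escape(kw) + @"\b", RegexOptions.IgnoreCase))
                        Console.WriteLine(file + ":" + (i + 1) + ": " + kw);
        }
    }
}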

Eliminating code duplication in a single file

Sadly, a project that I have been working on lately has a large amount of copy-and-paste code, even within single files. Are there any tools or techniques that can detect duplication or near-duplication within a single file? I have Beyond Compare 3 and it works well for comparing separate files, but I am at a loss for comparing single files.
Thanks in advance.
Edit:
Thanks for all the great tools! I'll definitely check them out.
This project is an ASP.NET/C# project, but I work with a variety of languages including Java; I'm interested in what tools are best (for any language) to remove duplication.
Check out Atomiq. It finds duplicate code that is a prime candidate for extracting to one location.
http://www.getatomiq.com/
If you're using Eclipse, you can use the Copy/Paste Detector (CPD): https://olex.openlogic.com/packages/cpd.
You don't say what language you are using, which is going to affect what tools you can use.
For Python there is CloneDigger. It also supports Java, but I have not tried that. It can find code duplication both within a single file and between files, and gives you the result as a diff-like report in HTML.
See SD CloneDR, a tool for detecting copy-paste-edit code within and across multiple files. It detects exact copies, copies that have been reformatted, and near-miss copies with different identifiers, literals, and even different sequences of statements.
CloneDR handles many languages, including Java (1.4, 1.5, 1.6) and C# up to C# 4.0. You can see sample clone detection reports at the website, including one for C#.
Resharper does this automagically: it suggests when it thinks code should be extracted into a method, and will do the extraction for you.
Check out PMD; once you have configured it (which is fairly simple) you can run its copy paste detector to find duplicate code.
Someone with basic Office skills can do the following sequence in a minute (a code version of the same idea is sketched below):
use an ordinary formatter to unify the code style, preferably without line wrapping
feed the code text into Microsoft Excel as a single column
search and replace all double spaces with single ones, and do other normalizing replacements
sort the column
At this point, duplicated lines will already stand out well. To go further:
add a comparator formula to the 2nd column and a counter to the 3rd
copy and paste the values again, sort, and see the most repetitive lines
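If Excel isn't handy, the same count-the-repeats idea fits in a few lines of C# (the input file name is a placeholder):

// Normalize whitespace, then count identical lines; the most repeated
// lines are candidates for extraction.
using System;
using System.IO;
using System.Linq;
using System.Text.RegularExpressions;

class LineDupes
{
    static void Main()
    {
        var groups = File.ReadLines("Program.cs")                   // placeholder input
            .Select(l => Regex.Replace(l.Trim(), @"\s+", " "))      // unify spacing
            .Where(l => l.Length > 0)
            .GroupBy(l => l)
            .Where(g => g.Count() > 1)
            .OrderByDescending(g => g.Count());

        foreach (var g in groups)
            Console.WriteLine(g.Count() + "x  " + g.Key);
    }
}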
There is an analysis tool called Simian, which I haven't yet tried. Supposedly it can be run on any kind of text and point out duplicated items. It can be used via a command line interface.
Another option similar to those above, but with a different tool chain: https://www.npmjs.com/package/jscpd
