We are developing an MVC .NET Core project with VS19.
We are also using Devextreme.
We have several cshtml files with devextreme components and templates.
Templates in DevExtreme ASP.NET MVC Controls support ERB-style syntax. The following constructions are available.
<% code %> - executes the code.
<%= value %> - prints the value as is (for example, John stays John, and <b>John</b> is emitted as live markup).
<%- value %> - prints the value escaping HTML (<b>John</b> becomes &lt;b&gt;John&lt;/b&gt;).
Implementing Templates
In the past we only used
<%= value %> - prints the value as is
which led to some injections in a pen test. As a result, we now only use
<%- value %> - prints the value escaping HTML (<b>John</b> becomes &lt;b&gt;John&lt;/b&gt;).
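To make the difference concrete, here is a hand-rolled sketch of the escaping that the escaped output block performs (illustrative only - this is not DevExtreme's actual implementation):

```javascript
// Hand-rolled sketch of HTML-escaping output; DevExtreme's internal
// implementation may differ, but the effect is the same.
function escapeHtml(value) {
  return String(value)
    .replace(/&/g, "&amp;") // ampersands first, to avoid double-escaping
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

// <%= value %> would emit the payload verbatim (the markup becomes live):
const raw = "<img src=x onerror=alert(1)>";
console.log(raw);

// <%- value %> emits inert text instead:
console.log(escapeHtml(raw)); // &lt;img src=x onerror=alert(1)&gt;
```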
We are also using SonarQube. The idea now is to create a rule that is triggered if someone does not use the correct escaping. But how can I achieve that?
As far as I understand this topic after a day of research, you can't create rules for (cs)html & C# in SonarQube. That led me to research writing a Roslyn analyzer that exports its results to SonarQube, but I found out that even those don't get triggered by cshtml files.
Nevertheless, I do get some warnings in cshtml files if I create anonymous JS functions:
Where do they come from? How can I create my own rules that are applied to cshtml files?
Kind regards
David
Your scenario may not be directly supported. I presume you reviewed Importing Issues from Third-Party Roslyn Analyzers (C#, VB.NET).
SonarQube analyzes code based on the (mapped) file extension. Just guessing now ...
We have .cshtml mapped to the language type HTML (YMMV). This is under Administration | Configuration | General Settings | HTML - {SQ_URL}/admin/settings?category=html (which also covers .jsp). Perhaps yours is mapped to the type JavaScript (JS)? Try undoing your mapping?
Related
We are a .NET shop (usually ASP.NET MVC) and we have a customer requirement for a static HTML site. As we have gone through this exercise, the only part that has gotten under my skin is the massive duplication that, in a dynamic site, we have many tricks for avoiding. Does anyone know of any libraries that would let me develop my code in Razor or something similar, using partial views, master pages, or similar tools, but then generate the output as a static HTML site?
I know that I could create a system to do so, but I have no need to reinvent the wheel if others have already created one.
P.S.: I am not really interested in debating whether my customer SHOULD want a static site.
Sounds like you are looking for a template engine for Visual Studio.
I suggest looking at the built in one - T4.
In Visual Studio, a T4 text template is a mixture of text blocks and control logic that can generate a text file. The control logic is written as fragments of program code in Visual C# or Visual Basic. The generated file can be text of any kind, such as a Web page, or a resource file, or program source code in any language.
You can use this to create your static HTML files.
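As a rough illustration (the file layout and the page list here are made up), a .tt template mixing text blocks with C# control logic might look like this:

```
<#@ template language="C#" #>
<#@ output extension=".html" #>
<html>
<body>
<ul>
<# foreach (var page in new[] { "Home", "About", "Contact" }) { #>
  <li><a href="<#= page #>.html"><#= page #></a></li>
<# } #>
</ul>
</body>
</html>
```

Saving the template triggers the transform and produces a plain .html file with one list item per page name; the `<# #>` blocks run as C#, while `<#= #>` blocks emit values into the text.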
Alternatively, take a look at embedding Razor, as Rick Strahl describes.
Or even T4 with Razor as described by Mikael Söderström.
If you are looking for a general preprocessor/template engine, try PPWizard -- don't let the site scare you, it's a nice tool.
Aside from T4MVC, does anyone use MvcContrib for MVC3 projects? We've decided to incorporate a prototype project that was built in MVC2 last year. It mainly uses the paging and sorting namespaces from MvcContrib, but also some fluent html helpers.
We want to upgrade the project to MVC3 and I am wondering if we should also try to remove some of the MvcContrib dependencies. Reasons to keep? Reasons to remove?
Yes, I use it in my projects. I use the Grid and the TestHelper extensively.
I am using TestHelper also, very useful and well written!
MvcContrib's strongly typed RedirectToAction gives you compile time errors if you delete or rename an action that you redirect to. With normal redirects, you're stuck with magic strings for action names, and as such the risk of overlooking a breaking change in your application.
The ModelStateToTempData attribute is also helpful, as it lets you retain ModelState while you redirect from an update POST action back to the form page, instead of returning a view directly from the update action (which is a bad practice).
I'm having trouble with an XSL translation in Chrome. I was wondering if there any tools that would allow me to step through the style sheet to see where the problem is.
Use node tests to check the results of XPath queries.
Use the document function to test file paths
Use the JavaScript console to run XPath queries on the XML data source
Use inline templates instead of xsl:include to eliminate path issues
Use comments to eliminate xsl:include statements referencing buggy templates
Use processing-instructions to comment blocks of code that have XML comments
Use an embedded stylesheet to workaround same-origin policy restrictions
Use input tags to print xsl:variable values.
Use attribute value templates to print unknown values
Use simplified stylesheets and parameterized XPath to modularize templates
Use Opera as a cross-reference, since it shows line numbers in its XSLT error messages.
On linux there is a tool called xsltproc which takes an XSL and XML and outputs the transform.
It also shows context around errors.
I've found this most useful when I'm developing as I can test the result of my changes without the need to have a development server up and running. It just works.
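A minimal round trip looks like this (the file names are placeholders; xsltproc ships with libxslt on most Linux distributions):

```shell
# A tiny document and a one-template stylesheet, just to exercise the tool.
cat > input.xml <<'EOF'
<?xml version="1.0"?>
<greeting>Hello</greeting>
EOF

cat > style.xsl <<'EOF'
<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="html"/>
  <xsl:template match="/greeting">
    <p><xsl:value-of select="."/></p>
  </xsl:template>
</xsl:stylesheet>
EOF

# -o writes the transform result to a file; parse errors are
# reported on stderr with surrounding context.
xsltproc -o out.html style.xsl input.xml
cat out.html
```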
However, I've noticed that the results of the transform can differ from Chrome's, for example. I don't know why this is: whether my transform was non-conforming, Chrome is non-conforming, or xsltproc is.
EDIT My comment about differences between Chrome and xsltproc rendering the transform slightly differently is likely invalid.
I had modified the XML schema somewhat, and since then, xsltproc was generating tags (based on type name of types in the schema) correctly, but Chrome was not.
I was doing hard reloads to avoid Chrome reusing the cache.
I could tell Chrome was using the new XSL, as there were other changes included that were being rendered.
Only the schema related tests were not working in Chrome for some reason.
I've since found that now it is magically working, with no changes to the xsl, just on a different day.
So I guess some part of the xsl was being cached somehow (perhaps just the schema bit - totally guessing here)... hence why some debugging in Chrome would be super nice.
We have a very old application dating back to ASP era which we are gradually refactoring to ASP.NET + VB.NET codebase.
It contains a lot of files of the following types:
aspx, asmx, ascx, vb, js (JavaScript), html, vbs (VBScript).
The backend database is SQL Server 2005 with lots of sprocs.
We would like to create code documentation generated automatically from the comments in the code files. I liked Doxygen very much, but it seems it does not support the above technologies. Can you please suggest some documentation generator tools, preferably a single tool or a small group of tools?
Thanks a lot.
Ajit.
You can take a look at Microsoft's Sandcastle tool. I've used it many times, and it generates documentation based on the comments provided in your .NET code. If I remember correctly, it can also generate documentation for JavaScript libraries.
There are some out there:
SandCastle
NDOC
I've used Sandcastle and it works very well if you have XML comments in your code.
You first enable xml documentation in your project by setting it in Project Properties -> Compile -> Generate XML Documentation.
Once done, you may want to treat warnings as errors, so that the IDE can point out to you where XML comments are missing.
To add an XML comment, you place your cursor before a class definition or a function definition and type
///
This will automatically generate the XML tags for documentation. Once you are done, you can import the project and start building the documentation.
The good part is, if you have documented your classes well, then when you use those functions in your application, on mouse-over you see the description you wrote, much like IntelliSense documentation works.
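For example (the method and all of its names here are made up, just to show the shape of the comment):

```csharp
/// <summary>
/// Calculates the order total including tax.
/// </summary>
/// <param name="subtotal">The pre-tax amount.</param>
/// <returns>The grand total as a decimal.</returns>
public decimal CalculateTotal(decimal subtotal)
{
    return subtotal * 1.2m; // hypothetical flat tax rate
}
```

Typing /// above the method stubs out the summary, param, and returns tags for you; both Sandcastle and IntelliSense read the same tags.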
Let me know if you run into any other issues.
My last suggestion, make a hello world project and xml document it and get used to sandcastle with it.
Recently, I have been thinking about how I can improve the quality of our projects by continuously checking the xHTML source on the Continuous Integration machine.
Look, there is a project:
http://sourceforge.net/projects/jtidy - JTidy
JTidy is a Java port of HTML Tidy, an HTML syntax checker and pretty printer.
It can validate xHTML through a command-line interface, and the tool can be extended in any way we need, because all the source code is open.
We can override every Selenium validation method, such as assertTextPresent or any other, so that it calls JTidy (passing the current state's HTML source), and if any errors or warnings occur, they can be saved to the Continuous Integration machine's build logs - so anyone on the project can see this info.
We don't have to rewrite all the Selenium methods to integrate this call into every step; we can make these calls wherever we want (after DOM manipulations).
Yes, we can use the W3C markup validators for our sites, but those validators can only check the initial state of the page's source. After page creation, there might be lots of DOM manipulations which can produce markup errors/warnings - with this scheme we can find them immediately.
One of the benefits of using Continuous Integration is that you get quick feedback from the code: how it integrates with the existing code base, whether unit and functional tests pass. Why not get additional useful info, such as an instant xHTML markup validation status? The earlier we identify a problem, the easier it is to fix.
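JTidy itself needs its jar on the classpath, so as a self-contained sketch of the core check, here is a JDK-only well-formedness test: since xHTML is XML, the built-in parser already catches unclosed tags that DOM manipulation might produce. The class and method names are mine, and a real integration would call JTidy for the full list of warnings:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.xml.sax.InputSource;

public class XhtmlCheck {
    // Returns true when the markup parses as well-formed XML.
    // xHTML is XML, so an unclosed tag left behind by a DOM
    // manipulation makes the parse fail here.
    static boolean isWellFormed(String markup) {
        try {
            DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(markup)));
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isWellFormed("<p>ok</p>"));  // true
        System.out.println(isWellFormed("<p>broken"));  // false
    }
}
```

An overridden Selenium assertion could call a check like this on the browser's current page source and append any failures to the CI build log.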
I haven't found anything on this theme on Google yet.
I want to know: what do you think about this idea?
Seems like a worthwhile idea.
I've done two similar things with CI before:
I've used Ant's XMLValidate task to validate static xhtml files as part of the build process
I've used HttpUnit to pull pages that I then parsed as XML.
But the idea of tying into Selenium to validate the content inherently during a functional test run is novel to me.
I think that the idea is brilliant, but it is very hard to implement from scratch.
Still, this idea is a natural evolution of the build/quality validation process, so it will probably be released as a ready-to-use tool with documentation someday.
Good idea! - in fact I just had exactly the same idea and was just checking to see if anyone had done it before - argh! Looks like you beat me to it :)
I was thinking along the lines of capturing each page hit by Selenium and auto-submitting it to the W3C HTML and CSS validators (by file rather than by link, so state is held) - failing on any errors. I like the jTidy idea though.
Great in principle, but I'm not quite sure how to call it from Selenium. I'd love to see documentation explaining how to run it from Selenese, or from PHPUnit.