Is there a way, or a function, in XSLT by which we can encrypt and decrypt a password? For example, in my case the password needs to be in an encrypted format (where the plain text is not visible to anyone and is replaced by some random alphanumeric value).
I created the XSLT file shown below, and I am using the credentials from this file in my OSB service.
But the requirement is that instead of plain text, this password needs to be in an encrypted format, and then I have to decrypt it in my code when I use it.
Is there any way to do this using XSLT, a custom template, or an XPath function?
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <aut:AutenticateRequest xmlns:aut="http://****/xsd/authenticate">
      <aut:username>xyz</aut:username>
      <aut:password>abcdef</aut:password>
      <aut:uinqueMachineID></aut:uinqueMachineID>
    </aut:AutenticateRequest>
  </xsl:template>
</xsl:stylesheet>
There's a general principle in security that you should never try to implement encryption yourself: even if you implement the best algorithms, you are likely to get it wrong. Always use carefully tested encryption library code.
XSLT 1.0 does not have any standard functions for encryption and decryption in its function library (and in fact, neither do later versions). This implies that you will have to call out to another language such as C#, Java, or JavaScript, which is something that most XSLT processors allow, though the details vary from one processor to another.
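For example, with Saxon-PE/EE you can call a static method of a Java class straight from the stylesheet as a reflexive extension function. The following is a minimal sketch, assuming a hypothetical class com.example.CryptoUtil on the classpath with a public static String decrypt(String) method; Saxon-HE and other processors need a different extension mechanism, so check your processor's documentation:

<xsl:stylesheet version="2.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:crypto="java:com.example.CryptoUtil"
                exclude-result-prefixes="crypto">
  <xsl:template match="/">
    <aut:AutenticateRequest xmlns:aut="http://****/xsd/authenticate">
      <aut:username>xyz</aut:username>
      <!-- only ciphertext lives in the stylesheet; the (hypothetical) Java helper decrypts it at runtime -->
      <aut:password>
        <xsl:value-of select="crypto:decrypt('3f9a1c0d...')"/>
      </aut:password>
    </aut:AutenticateRequest>
  </xsl:template>
</xsl:stylesheet>

Keep in mind that this only hides the password from casual inspection of the XSLT: anyone able to invoke the decrypt method (or read its key) can recover the plain text, so the key management around that Java class matters more than the stylesheet itself.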
I have a question regarding the OWASP ESAPI interface for XSS protection. To keep it simple, short, and straight: I'm doing a source code review using Fortify.
The application implements ESAPI, makes a call to ESAPI.encoder().canonicalize(user input), does no further validation, and prints the output. Is this still vulnerable to XSS?
PS: The reflection point is inside an HTML element.
I've gone through all the posts regarding the ESAPI interface on Stack Overflow, but couldn't quite get it.
Any help would be appreciated.
canonicalize alone doesn't prevent XSS at all. It decodes data, but you want the opposite: to encode the data.
Not only does it allow content like <script>alert(1)</script> straight through, but it also decodes &lt;script&gt;alert(1)&lt;/script&gt; from a non-executable string into an executable one.
The method you want instead is encodeForHTML. This will encode the data so it can be inserted safely into an HTML context, so < will become &lt; and so on.
Also, check whether HTML encoding is already being applied by testing how these characters come through; some templating languages and tags do the encoding automatically.
I have a legacy VB application that still has some life in it, and I want to translate it into another language.
I plan to write a Ruby script, possibly utilising a parser, to extract all strings from the three million lines of source, replace them with constants, and move them to a string resource file that can be used to provide translations.
Is anyone aware of a script/library that could be used to intelligently extract the strings?
I'm not aware of any existing off-the-shelf tool that you could use. We created a tool like this at my work, and it worked well. The FRM file format is quite simple (although only briefly documented). We wrote a tool that (1) extracted all strings from control definitions and (2) generated the code to reload them at runtime during Form_Load.
There must exist some application to do the following, but I am not even sure how to Google for it.
The dilemma is that we have to backtrace defects, and doing so requires seeing how certain fields in the output XML were generated by the XSL. The hard part is spending hours in the XSL and XML trying to figure out where a field was even generated. Even debugging is difficult when you are working with multiple XSL transformations and edits, as you still need to find the primary keys involved in the specific scenario for that transform.
Is there some software program that could take an XSL and perhaps do one of two things:
Feed it an output field name and have it generate a list of all the possible criteria that would generate this field, so you can figure out which one of a dozen in the XSL meets your criteria, or
Somehow convert the XSL into some more readable if/then type format (kind of like how you can use Javadoc to produce readable documentation).
You don't say what tools you are currently using. Tools like oXygen and Stylus Studio have quite sophisticated XSLT debugging capabilities. oXygen's output mapping tool (see http://www.oxygenxml.com/xml_editor/working_with_xslt_debugger.html#xsltOutputMapping) sounds very like the thing you are asking for.
Using schema-aware stylesheets can greatly ease debugging. At least in the Saxon implementation, if you declare in your stylesheet that you want the output to be valid against a particular schema, then if it isn't, Saxon will tell you what instruction in the stylesheet caused invalid output to be generated. Sometimes it will show you the error at stylesheet compile time, before you even supply a source document. This capability is greatly under-used, in my view. More details here: http://www.stylusstudio.com/schema_aware.html
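In outline, a schema-aware stylesheet imports the output schema and requests validation of the result. A minimal sketch, assuming a schema-aware processor such as Saxon-EE and a hypothetical orders.xsd describing the output:

<xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- import the schema the output must conform to (orders.xsd is hypothetical) -->
  <xsl:import-schema schema-location="orders.xsd"/>
  <xsl:template match="/">
    <!-- with validation="strict", the processor reports which instruction produced invalid output -->
    <xsl:result-document validation="strict">
      <xsl:apply-templates/>
    </xsl:result-document>
  </xsl:template>
</xsl:stylesheet>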
It's an interesting question. Your suggestions are also interesting but would be quite challenging to develop; I know of no COTS or FOSS solution to either, but here are some thoughts:
Your first possibility is essentially data-flow analysis from compiler design. I know of no tools that expose this to the user, but you might ask XSLT processor developers if they have ever considered externalizing such an analysis in a manner that would be useful to XSLT developers.
Your second possibility is essentially a documentation generator against XSLT source. I have actually helped to complete one for a client in financial services in the past (see Document XSLT Automatically), but the solution was the property of the client and was never released publicly as far as I know. It would be possible to recreate such a meta-transformation between XSLT input and HTML or DocBook output, but it's not simple to do in the most general case; a minimal sketch of the idea follows this list.
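To give a flavor of such a meta-transformation, the sketch below treats a stylesheet as an ordinary XML input document (which it is) and prints one line per template rule; a real documentation generator would go much further, but the mechanics are the same:

<xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <xsl:template match="/">
    <!-- list every template rule in the stylesheet being documented -->
    <xsl:for-each select="//xsl:template">
      <xsl:value-of select="concat('match=', @match, '  mode=', (@mode, '#default')[1], '&#10;')"/>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>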
There's another approach that you might consider:
Tighten up your interface definition. In your comment, you mention uncertainty as to whether a problem's source is bad data from the sender or a bug in the XSLT. You would be well-served by a stricter interface definition. You could implement this via better typing in XSD, addition of xsd:assertion statements if XSD 1.1 is an option, or adding a Schematron-based interface checking level, which would allow you the full power of XPath-based assertions over the input. Having such an improved and more specific interface definition would help both you and your clients know what should and should not be sent into your systems.
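As a flavor of the Schematron option, a rule file is just a set of XPath assertions over the input; this sketch uses hypothetical element and attribute names:

<sch:schema xmlns:sch="http://purl.oclc.org/dsdl/schematron">
  <sch:pattern>
    <!-- hypothetical business rule: every order sent in must carry a customer id -->
    <sch:rule context="order">
      <sch:assert test="string-length(@customerId) > 0">
        An order must have a non-empty customerId attribute.
      </sch:assert>
    </sch:rule>
  </sch:pattern>
</sch:schema>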
I have a strange problem. I am generating an XSD-to-XSD mapping in MapForce, and it is valid and producing output. However, when the XSLT is used by our DataPower folks, they say the namespace prefixes in the XSLT prevent the code from finding the nodes in the incoming message.
For example in the XSLT, the select is:
<xsl:for-each select="ns0:costOrderHeaderLookupResponse/return/ns1:Order">
In the incoming message, the namespace prefix is as below:
*snip*
<return>
<ns2:Order BillToID="300850001000" DocumentType="0001"....*snip*>
However, MapForce generates the output just fine, with no errors, even with the namespace prefix difference.
The DataPower folks are requesting that instead of the namespace prefix I customize MapForce to output the nodes like this:
/*[local-name()='Order']
I read the MapForce documentation and googled for a while, but I am not finding a way to customize the XSLT output like this. It is possible for C/Java/etc., but I am not finding any help on changing how the XSLT is generated.
Create a filter in MapForce and use a boolean function (like core:logical functions:equal) to check whether the local-name of the node in the select (costOrderHeaderLookupResponse/return/Order) is equal to a constant string with the value Order. The function to check the local-name is local-name, found in the xslt:xpath functions library.
The filter should replace your connection from the Orders node to whatever node it is mapped to in the second XSD.
To see how filters work (assuming you aren't already using one to get your select) view http://manual.altova.com/Mapforce/mapforcebasic/index.html?mfffilteringdata.htm
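For reference, the select that this filter arrangement should produce looks something like the following (a sketch; the exact XSLT MapForce emits may differ):

<xsl:for-each select="*[local-name()='costOrderHeaderLookupResponse']/*[local-name()='return']/*[local-name()='Order']">
  <!-- per-Order mapping goes here -->
</xsl:for-each>

Bear in mind that local-name() tests ignore namespaces entirely, so this matches an Order element in any namespace; that is usually acceptable for this kind of prefix mismatch, but it is looser than a proper QName match.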
The XSLT transformation is done through .NET code using the API provided by Saxon. I am using the Saxon 9 Home Edition API. The XSLT version is 2.0, and it generates XML output. The input file size is 123 KB.
The XSLT adds attributes to the input XML file depending on certain scenarios. A total of 7 modes are used in this XSLT; the value of an attribute generated in one mode is used in another mode, hence the multiple modes.
The output is generated correctly, but it takes around 10 seconds to execute this XSLT. When the same XSLT is executed in Altova XMLSpy 2013, it takes around 3-4 seconds.
Is there a way to reduce this 10-second execution time? What could be the cause of such a long execution?
The XSLT is available for download at the link below.
XSLT Link
Without having a source document to run this against (and therefore to make measurements) it's very hard to be definitive about where the inefficiencies are, but the most obvious at first glance is the weird testing of element names in patterns like:
match="*[name()='J' or name()='H' or name()='F' or name()='D' or name()='B' or name()='I' or name()='G' or name()='E' or name()='C' or name()='A' or name()='X' or name()='Y' or name()='O' or name()='K' or name()='L' or name()='M' or name()='N']
which in Saxon would be vastly more efficient if written the natural way as
match="J|H|F|D|B|I|G|E|C|A|X|Y|O|K|L|M|N"
It's also more likely to be correct that way, since comparing name() against a string is sensitive to the chosen prefix, and XSLT code really ought to work whatever namespace prefix the source document author has chosen.
The reason the latter is much more efficient is that Saxon organizes the source tree for rapid matching of elements by name (meaning namespace URI plus local name, excluding prefix). When you match by name in this way, the matching template rule can be found by a quick hash table lookup. If you use predicates that have to be evaluated by expanding the name (with prefix) as a string and comparing the string, not only does each comparison take longer, it can't be optimized with a hash lookup.
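If the source elements were in a namespace, the same rewrite would still apply: bind the namespace URI to a prefix of your own choosing in the stylesheet and match by QName. A sketch, with a hypothetical namespace URI:

<xsl:stylesheet version="2.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:s="http://example.com/source">
  <!-- s: is bound to the URI, so this matches by URI plus local name, whatever prefix the source uses -->
  <xsl:template match="s:J | s:H | s:F | s:D | s:B">
    <!-- processing for these elements -->
  </xsl:template>
</xsl:stylesheet>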