LoadRunner Design Studio misses URL-encoded variations of correlated parameters

We are a shop that uses LoadRunner and VuGen (recording in standard HTTP/HTML web mode), and we have an application which uses long base64 parameters that are sometimes URL-encoded (primarily with + turned into %2B) and sometimes not. Design Studio finds only one form or the other (depending on the server response) and correlates requests using only the source encoding, not both.
For example, let's say there's a value which needs to be passed back into the application. In the original response from the server, which is the source of the correlation, the value is "ABCDEF+012345".
Now, in some cases it is submitted exactly like that, in which case Design Studio successfully correlates it and replaces the value in those requests. However, in other cases it's submitted as "ABCDEF%2B012345" (URL-encoded to replace + with %2B), in which case Design Studio does NOT correlate the requests that use that variation of the value.
If it were only a single instance, it wouldn't be a big deal to manually add a conversion function and then search/replace the uses of the parameter that did not get correlated. Unfortunately this is cumbersome, and we have scripts with about 100 parameters, each 200-400 characters long. Not only would fixing this take a lot of time, but because the values frequently wrap in the editor due to their length, search/replace isn't possible anyway.
Is there any way to have Design Studio correlate parameters when the used value may or may not be encoded? This seems like it should be a pretty common thing to do.

I recommend using the TruClient protocol, which solves these correlation issues.
If you insist on using the HTTP protocol, in your case you are better off not using Design Studio. Capture the parameter yourself with a web_reg_save_param call placed above the line of code that requests the page.
This way, you specify the left and right boundaries yourself rather than letting Design Studio use regex or other recognition methods that are sometimes problematic:
web_reg_save_param("param1", "LB=textleftofyourparam", "RB=textrightofyourparam", LAST);
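If you need both the plain and the URL-encoded form of the captured value, one approach (a minimal sketch; the boundaries, URLs and parameter names are illustrative) is to capture the value once, copy it into a second parameter, and URL-encode the copy with web_convert_param:

web_reg_save_param("corrRaw", "LB=textleftofyourparam", "RB=textrightofyourparam", LAST);
web_url("SourcePage", "URL=http://yourserver/app/source", LAST);  // the request whose response contains the value
// Copy the captured value into a second parameter...
lr_save_string(lr_eval_string("{corrRaw}"), "corrEnc");
// ...and URL-encode that copy in place (+ becomes %2B, etc.)
web_convert_param("corrEnc", "SourceEncoding=PLAIN", "TargetEncoding=URL", LAST);

Then substitute {corrRaw} in the requests that submit the plain form, and {corrEnc} in the requests that submit the encoded form.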


Should the length of a URL string be limited to increase security?

I am using ColdFusion 8 and jQuery 1.7.2.
I am using CFAJAXPROXY to pass data to a CFC. Doing so creates a JSON array (argument collection) and passes it through the URL. The string can be very long, since quite a bit of data is being passed.
The site that I am working on has existing code that limits the length of any URL query string to 250 characters. This is done in the Application.cfm file by testing the length of the query string; if any query string is greater than 250 characters, the request is aborted. The purpose of this was to ensure that hackers or other malicious code couldn't be passed through the URL string.
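In other words, the guard amounts to something like the following (a sketch in C/CGI terms for concreteness; the real check is CFML in Application.cfm):

#include <stdlib.h>
#include <string.h>

/* Mirror of the Application.cfm check: reject any request whose
   query string exceeds 250 characters. */
int query_string_ok(void) {
    const char *qs = getenv("QUERY_STRING");
    return qs == NULL || strlen(qs) <= 250;
}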
Now that we are using the query string to pass JSON arrays in the URL, we discovered that the Ajax request was being aborted quite frequently.
We have many other security practices in place, such as stripping any "<>" tags from code and using CFQUERYPARAM.
My question is whether limiting the length of a URL string for the sake of security is a good idea, or simply ineffective?
There is no correlation between URI length and security. Security is rather a question of:
Limiting the information you provide to a user agent to a "need to know" basis. This covers things such as the type of application server you run and its associated conventions, the web server you run and its associated conventions, and the operating system on the host machine. These are essentially things that can be considered vulnerabilities.
Reducing the impact of exploiting those vulnerabilities, i.e. applying patches, ensuring correct configuration, etc.
As alluded to above, at the web tier this covers not only GETs (your concern) but also POSTs, PUTs, DELETEs and just about any other operation on an HTTP resource.
Moved this into an answer for Evik -
That seems (at best) completely unnecessary if the inputs are being properly sanitized. I'm sure someone clever can quickly defeat a "security by small doorway" defense, assuming that's the only defense.
OWASP has some good, sane guidelines for web security. As far as I've read, limiting the size of the url is not on the list. For more information, see: https://www.owasp.org/index.php/Top_10_2010-Main
I would also like to echo Hereblur's comment that this makes internationalization tricky, or maybe impossible.
I'm not a ColdFusion developer, but I think it's the same as with other languages.
I think it helps just a little bit. Malicious code and SQL injection should be handled by your application.
I agree that limiting the length of query string values is safer and makes things more difficult for attackers, but you can't do this with POST data, and it limits some functionality. For example:
A single UTF-8 character may take 9 characters once encoded, which means you could pass only about 27 non-English characters in 250.
The only reason to limit it has to do with performance and DoS attacks, not security per se (though DoS is a security threat, since it can bring down your server). Web servers and app servers (including CF) allow you to limit the size of POST data so that your server won't be degraded by very large file uploads. Substantial URL data can likewise result in long-running requests as the server struggles to parse or handle or write it.
So there is some modest risk here related to such things. Back in the NT days, IIS 3 had a number of flaws that were "locked down" by limiting the length of the URL, but those days are long gone. There is far more low-hanging fruit among exploits that I would look at first before examining this issue too closely - unless of course you feel you have a specific problem with folks probing you (with long URLs, I mean :).

What are your top 3 XPages performance tips for new XPages developers?

What 3 things would you tell developers new to XPages to do to help maximize the performance of their XPages apps?
Tim Tripcony has given a bunch of suggestions here:
http://www-10.lotus.com/ldd/xpagesforum.nsf/topicThread.xsp?action=openDocument&documentId=365493C31B0352E3852578FD001435D2#AEFBCF8B111E149B852578FD001E617B
Not sure if this tip is for beginners, but use any of the LifeCyclePhaseListeners from the OpenNTF Snippets to see what is going on in your datasources during a complete or partial refresh (http://openntf.org/XSnippets.nsf/snippet.xsp?id=a-simple-lifecyclelistener-)
Use the Extension Library. Report bugs (or what you consider a bug) at OpenNTF.
Use the SampleDb from the ExtLib. You can easily modify the samples to your own needs. It is even good for testing whether an issue you encounter is reproducible in this DB.
Use Firebug (or a similar tool that comes with the browser of your choice). If you see an error in the error tab, go and fix it.
Since you're asking for only 3, here are the tips I feel make the biggest difference:
Determine what your users / customers mean by "performance", and set the page persistence option accordingly. If they mean scalability (max concurrent users), keep pages on disk. If they mean speed, keep pages in memory. If they want an ideal mixture of speed and scalability, keep the current page in memory. This latter option really should be the server default (set in the server's xsp.properties file), overridden only as needed per application (see the xsp.properties sketch after these tips).
Set value bindings to compute on page load (denoted by a $ in the source XML) wherever possible instead of compute dynamically (denoted by a #). $ bindings are evaluated only once, while # bindings are recalculated over and over again, so changing computations that only need to be loaded once per page to $ bindings speeds up both initial page load and any events fired against the page once loaded.
Minimize the use of SSJS. Wherever possible, use standard EL instead (e.g. ${database.title} instead of ${javascript:return database.getTitle();}). Every SSJS expression must be parsed into an abstract syntax tree to be evaluated, which is incrementally slower than the standard EL resolver.
There are many other ways to maximize performance, of course, but in my opinion these are the easiest ways to gain noticeable improvement.
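For the persistence option in the first tip, the server-level default lives in the server's xsp.properties file. A sketch, assuming the standard XPages persistence mode values (verify the exact values against your server's documentation):

# Keep the current page in memory and older pages on disk
# (the speed/scalability mixture recommended above)
xsp.persistence.mode=fileex
# Alternatives: basic (all pages in memory: speed),
# file (all pages on disk: scalability)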
1. Use a Script Library instead of writing the bulk of the code in the XPage.
2. Use a theme, or a separate CSS class for each element.
3. Moreover, try to keep your SSJS code under control, because server-side requests reduce system performance.
4. Finally (consider this a sub-point of 3), prefer the direct functions available in SSJS; don't use while and for loops for things like walking a document collection just to get a count.
The basics, like:
Use the immediate flag (or one of the other flags) on server-side events if possible.
Check the flag (I forget its name) which generates the CSS and JS as one big file at runtime, thereby minimizing the number of requests.
Choose your scope wisely. Don't put everything in your sessionScope: define when, where and how you are using the data, and based on that use the correct scope. This can lead to better memory usage.
And of course the most important one: read the Mastering XPages book.
Other tips I would add:
When retrieving data, use ViewEntryCollections or the ViewNavigator.
Upgrade to 8.5.3.
Use default HTML tags if possible. If you don't need the functionality of an xp:div or xp:panel, use a plain <div> instead so you don't generate an extra UIComponent on the component tree.
Define what page persistence mode you need.
It depends a lot on what you mean by performance. For performance of the app:
Use compute on page load wherever feasible. It significantly improves performance.
In larger XPages particularly, combine code into single controls where possible. E.g. Use a single Computed Field control combining literal strings, EL and SSJS rather than one control for each language. On that point, EL performs better than SSJS, and SSJS on the XPage performs better than SSJS in a Script Library.
Use dataContexts for properties that are calculated more than once on an XPage.
Partial Execution mode is a very strong recommendation, but probably beyond new XPages developers at this point. Java will also perform better than SSJS in a Script Library, but again beyond new developers. XPages controls you've created with the Extensibility Framework should perform better, because they should run fewer lines of Java than multiple controls, but I haven't tested that.
If you mean performance of the developer:
Get the Extension Library.
Use themes to set default properties, e.g. A standard style for all your pagers.
Use Firebug. If you're developing for the Notes client or IE, still use Firebug. You'll spend longer suffering through the client/IE than you will fixing the few quirks that remain.

Most performant live search technique for mobile safari

I am building a mobile web application that targets webkit. I have a requirement to perform a live search (on keypress) against a database of ~5000 users.
I've tried a number of different techniques:
On page load, making an AJAX call which loads an in-memory representation of all 5000 users, then querying them on the client. I tried sending JSON, which proved to be too large, and also a custom delimited string, which was then parsed using split(). This was better, but ultimately searches against this array of users were slow.
I tried using a conventional AJAX call, which would return users based on a query, also using the custom delimited string technique. This was better, but I was forced to tune it so that searches were only performed with a minimum of 3 characters. This is not optimal, as I would like to be able to start filtering after 1 character. I could also throttle the calls so that not every keystroke within a certain threshold triggered a request. This could help with performance, but I'd rather not have to fiddle with that sort of thing.
Facebook mobile does this very well if you try their friend search. Searches happen instantaneously, and are triggered after 1 character.
My question is, does anyone have any suggestions for faster live searches for a mobile app? Should I be looking at localStorage? Is this reliable, feasible?
Is there any reason you can't use a binary search? If you sort the names once, every name matching what the user has typed so far sits in one contiguous block, and a binary search finds the start of that block. If you want both first-name and last-name search, you could create a second copy of the data sorted by last name and look in both sets.
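A sketch of that idea (in C for concreteness; the same logic ports directly to JavaScript, and the array and prefix here are illustrative): binary-search for the first entry that is >= the typed prefix, then scan forward while entries still match.

#include <stdio.h>
#include <string.h>

/* Index of the first entry in the sorted array that is >= prefix. */
static int lower_bound(const char **names, int n, const char *prefix) {
    int lo = 0, hi = n;
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if (strcmp(names[mid], prefix) < 0)
            lo = mid + 1;
        else
            hi = mid;
    }
    return lo;
}

int main(void) {
    const char *names[] = { "alice", "bob", "bonnie", "carol" };  /* pre-sorted */
    const char *prefix = "bo";
    size_t plen = strlen(prefix);
    /* All matches are contiguous starting at lower_bound. */
    for (int i = lower_bound(names, 4, prefix);
         i < 4 && strncmp(names[i], prefix, plen) == 0; i++)
        printf("%s\n", names[i]);
    return 0;
}

Even over 5000 entries this is at most a dozen or so string comparisons per keystroke, which is why it can run comfortably on every keypress.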
Some helpful but more complicated data structures that address this type of problem include:
http://en.wikipedia.org/wiki/Directed_acyclic_word_graph
http://en.wikipedia.org/wiki/Trie

Client side processing with javascript vs server side with mod_perl

I have a Perl script that converts strings to and from different encodings, like base64, ASCII or hex. Now I am writing an AJAX front end for it, and my question is: if I want to automate detection of the encoding of the submitted string, is it more efficient to run a regex over the string in JavaScript before sending it to the server, or is it faster to leave it to the Perl script to figure out what type of string it is?
To clarify, I am asking which of these two is better:
String submitted
Javascript detects the encoding
AJAX submits encoding and the string to perl script
Perl script returns decoded string
or
String submitted
AJAX submits the string to perl script
Perl script detects encoding and returns decoded string
Is there a particular rule of thumb for where this type of processing should be performed, and which do you think is the better (meaning faster) implementation?
You must validate your data on the server. Period. Otherwise you'll be sailing off into uncharted waters as soon as some two-bit wannabe "hacker" passes you a base64 string and a tag claiming that your javascript thinks it's hex.
Given this, it's up to you whether you want to also detect encoding on the client side. This has some potential benefits, since it allows you to not send data to the server at all if it's encoded in an invalid fashion or to tell the user what encoding was detected and allow them to correct it if it's an ambiguous case (e.g., hex digits are a subset of the base64 character set, so any hex string could potentially be base64). Just remember that, if an encoding gets passed to the server by the client, the server must still sanity-check the received encoding specifier and be prepared to ignore it (or reject the request completely) if it's inappropriate for the corresponding data.
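To illustrate that ambiguity (a sketch in C for concreteness; the character-set tests are illustrative, not a full validator): a string made up entirely of hex digits also passes a base64 character-set test, so a charset check alone cannot decide between the two.

#include <stdio.h>
#include <string.h>

/* True if s is non-empty and consists solely of characters from set. */
static int all_in(const char *s, const char *set) {
    return s[0] != '\0' && strspn(s, set) == strlen(s);
}

int main(void) {
    const char *s = "DEADBEEF";
    int is_hex = all_in(s, "0123456789abcdefABCDEF");
    int is_b64 = all_in(s, "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                           "abcdefghijklmnopqrstuvwxyz0123456789+/=");
    /* Both tests pass here, so the client can at best report "ambiguous"
       and the server must make (and enforce) the final decision. */
    printf("hex=%d base64=%d\n", is_hex, is_b64);
    return 0;
}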
This depends on the scale.
If there will be a LOT of client requests, it's definitely "faster" to do it on the client side (e.g. in JS before the AJAX call), since putting it on the server side forces the server to process ALL of those requests, which compete for the server's CPU resources, whereas on the client side each client performs only its own detection.
If you only anticipate very few concurrent requests, then doing it in Perl is probably marginally faster, since Perl's regex implementation is likely better/faster than JavaScript's (I don't have any stats to back this up, though) and presumably the server has a better CPU.
But I wouldn't expect the server-side margin to be terribly big, considering the whole processing shouldn't take that long on either side, so I'd advise going with client-side checking since, as per the first paragraph, it scales better.
If the performance difference between the two really matters to you a lot, you should actually implement both and benchmark under both the average anticipated and the maximum projected client loads.

Does soCaseInsensitive greatly impact performance for a TdxMemIndex on a TdxMemDataset?

I am adding some indexes to my DevExpress TdxMemDataset to improve performance. The TdxMemIndex has SortOptions which include the option for soCaseInsensitive. My data is usually a GUID string, so it is not case sensitive. I am wondering if I am better off just forcing all the data to the same case or if the soCaseInsensitive flag and using the loCaseInsensitive flag with the call to Locate has only a minor performance penalty (roughly equal to converting the case of my string every time I need to use the index).
At this point I am leaving soCaseInsensitive off and just converting case.
IMHO, the best approach is to ensure data quality at Post time. Reasons:
You (usually) know the nature of the data, so you can, e.g., use UpperCase (knowing that GUIDs are all in the ASCII range) instead of the much slower AnsiUpperCase that a general component like TdxMemDataSet is forced to use (see the sketch after this list).
You enter the data only once, whereas searching, sorting and filtering, which all go through the internal uppercasing engine of TdxMemDataSet, are repeated actions. There are also chained actions which trigger this engine without you realizing it, e.g. a TcxGrid which is sorted by default with GridMode:=True (I assume that you use the DevExpress components) and a class acting as a broker passing the sort message to the underlying dataset.
Usually data entry is done in steps, one or a few records per batch; the only notable exception is data-acquisition applications. In both cases, though, users tolerate much greater response times on data entry, giving you room to play with. (IOW, how much would an UpperCase call add to a record post which lasts 0.005 ms?) OTOH, users are very demanding about the speed of data retrieval operations (searching, sorting, filtering etc.), so keep data retrieval as fast as you can.
Having the data in the database ready to expose reduces the risk of processing errors when you write (if you write) other modules, since you would need to remember to AnsiUpperCase the data in every module, in every language. A classic example is when you use external tools to access the data (e.g. a DB manager executing an SQL SELECT over the data).
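As a sketch of the post-time normalization being recommended (in C for concreteness; in Delphi this is just UpperCase(s), which is safe because GUIDs are plain ASCII):

#include <ctype.h>
#include <stdio.h>

/* Upper-case a GUID string once, at post time, so the index never
   has to case-fold it again on every search, sort or filter. */
static void guid_upper(char *s) {
    for (; *s; s++)
        *s = (char)toupper((unsigned char)*s);
}

int main(void) {
    char guid[] = "3f2504e0-4f89-11d3-9a0c-0305e82c3301";
    guid_upper(guid);
    printf("%s\n", guid);  /* prints 3F2504E0-4F89-11D3-9A0C-0305E82C3301 */
    return 0;
}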
hth.
Maybe the DevExpress forums (or even a support email, if you have access to it) would be a better place to seek an authoritative answer to that performance question.
Anyway, it is better to guarantee that the data is in the format you want the moment you save it, for the reasons plainth already explained. So, in this specific case, make sure the GUID is written in upper case (or lower; it's a matter of taste). If it is SQL Server or another database server that has a GUID datatype, make sure the SELECT does the work, and if applicable and possible, even the sort.
