Correlation value not capturing in LoadRunner - correlation

I am doing correlation for my application. I have placed the statement in the proper place, the boundaries are correct, and I have included web_set_max_html_param_len. But I am still getting this error:
Error -26377: No match found for the requested parameter
"c_datatable". Either the specified boundaries were not found in the
response or the matched text is longer than current max html parameter
size of 99999 bytes
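
For reference, here is a minimal sketch of how this kind of capture is usually arranged in a LoadRunner script; the boundaries, step name, URL, and enlarged size below are placeholders, not values from the actual script:

Action()
{
    /* Raise the cap mentioned in the error; 99999 bytes is the size currently in effect. */
    web_set_max_html_param_len("262144");

    /* The registration must come immediately before the step whose RESPONSE
       contains the value, not before the step that later USES it. */
    web_reg_save_param("c_datatable",
        "LB=<table id=\"datatable\">",   /* hypothetical left boundary  */
        "RB=</table>",                   /* hypothetical right boundary */
        "Search=Body",
        "NotFound=Warning",              /* warn instead of aborting while debugging */
        LAST);

    web_url("loadData",
        "URL=http://example.com/app/loadData",   /* placeholder URL */
        "Mode=HTML",
        LAST);

    lr_output_message("captured %d bytes", (int)strlen(lr_eval_string("{c_datatable}")));

    return 0;
}

If the matched text really can exceed the configured maximum, raising web_set_max_html_param_len (or tightening the boundaries so less text is matched) addresses the second half of the error message.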

Related

JMeter: How to count the number of rows returned from a search (response or GUI)

In JMeter I need to perform a large search and count the number of rows which are returned. Max rows are 50000.
The number of rows which are returned is shown on the website after a search: "Number of returned rows: xx".
Or I can count the rows inside the HTTP response.
I have tried to use a regex post-processor to count the number of rows which are returned; the problem is that JMeter freezes since the HTTP response is so large.
I have also tried to extract the text directly from the website, unsuccessfully. I guess one can't do that, since the information is not in the HTTP response?
So:
Is there some faster and less demanding way to count all the returned rows inside an HTTP response body?
Or is there some way to get the text directly from the website?
Thank you.
It looks like your application is buggy; I don't think that returning 50000 entries in a single shot is something people should be doing, as it creates extra network traffic and consumes a lot of resources on both the server and the client (browser) side. I would rather expect some form of pagination when it comes to operating on large amounts of data.
If you're totally sure that your application works as expected, you can try using the Boundary Extractor, which has been available since JMeter 4.0.
Due to the specifics of its internal implementation, it consumes fewer resources and acts faster than the Regular Expression Extractor, so the load you will be able to generate from a single machine will be higher.
Check out The Boundary Extractor vs. the Regular Expression Extractor in JMeter article for more information.
Yes, you can get that count via matchNr, which is appended to the reference name. Use a Regular Expression Extractor that matches any per-row name or ID, set Match No. to -1, and then, if the regex variable name is totalcount, you can fetch the count with ${totalcount_matchNr}.

What's the expected behavior of the Bing Search API v5 when deeply paginating?

I perform a Bing API search for webpages with the query cameras.
The first "page" of results (offset=0, count=50) returns 49 actual results. It also returns a totalEstimatedMatches of 114000000 -- 114 million. Neat, that's a lot of results.
The second "page" of results (offset=49, count=50) performs similarly...
...until I reach page 7 (offset=314, count=50). Suddenly totalEstimatedMatches is 544.
And the actual count of results returned per-page trails off precipitously from there. In fact, over 43 "pages" of results, I get 413 actual results, of which only 311 have unique URLs.
This appears to happen for any query after a small number of pages.
Is this expected behavior? There's no hint from the API documentation that exhaustive pagination should lead to this behavior... but there you have it.
Each time the API is called, the search API obtains a group of possible matches starting at the offset in the result set, and then filters out results based on different parameters (e.g. spam, duplicates, SafeSearch setting, etc.), finally leaving a final result set. If the final result set after filtering and optimization contains more than the count parameter, then a number of results equal to count is returned. If the count parameter is more than the final result set count, then the entire final result set is returned, which will be fewer than count. If the search API is called again, passing in the offset parameter to get the next set of results, then the filtering process happens again on that next set of results, which means it may also contain fewer than count results.
 
You should not expect the full count parameter number of results to always be returned for each API call. If further search results beyond the number returned are required, the query should be called again, passing in an offset parameter with a value equal to the total number of results returned by the previous API calls. This also means that when making subsequent API calls, the offset parameter should never be a hard-coded value and should always be calculated based on the results of previous queries.
 
totalEstimatedMatches can also add to confusion around the Bing Search API results.  The word ‘estimated’ is important because the number is an estimation based on an initial quick result set, prior to the filtering described above.  Additionally, the totalEstimatedMatches value can change as you iterate through the result set by making subsequent API calls with increasing offset values.  The totalEstimatedMatches should only be used as a rough guide indicating the magnitude of the possible result set, and it should not be used to determine the number of results that will ultimately be returned.  To query all of the possible results you should continue making API calls, passing in offset with a value of the sum of the results returned in previous calls, until that sum is greater than totalEstimatedMatches of the most recent API call.
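
A small sketch of that paging rule, with the HTTP call stubbed out; fetch_page below is a stand-in for a real Bing Web Search request, and the simulated numbers exist only to make the loop runnable:

#include <stdio.h>

typedef struct {
    int  returned;                 /* results actually returned by this call */
    long totalEstimatedMatches;    /* estimate reported alongside this call  */
} PageResult;

/* Placeholder for the real HTTPS request (offset and count query parameters). */
static PageResult fetch_page(long offset, int count)
{
    PageResult r;
    r.returned = (offset < 300) ? count - 1 : 7;                   /* simulated tail-off */
    r.totalEstimatedMatches = (offset < 300) ? 114000000L : 544L;  /* estimate shrinks   */
    return r;
}

int main(void)
{
    const int count = 50;
    long fetched = 0;              /* sum of results returned so far */
    PageResult r;

    do {
        r = fetch_page(fetched, count);   /* offset = results already received, never hard-coded */
        fetched += r.returned;
        printf("received %d, offset now %ld, estimate %ld\n",
               r.returned, fetched, r.totalEstimatedMatches);
    } while (r.returned > 0 && fetched < r.totalEstimatedMatches);

    return 0;
}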
 
Note that you can see this same behavior by going to bing.com directly and using a query such as https://www.bing.com/search?q=bill+gates&count=50. Notice that you will get around 34 results with a totalEstimatedMatches of ~567,000 (valid as of June 2017; future searches may change), and if you click the 'next page' arrow you will see that the next query executed will start at the offset of the 34 returned in the first query (i.e. https://www.bing.com/search?q=bill+gates&count=50&first=34). If you click 'next' several more times you may see the totalEstimatedMatches also change from page to page.
This seems to be expected behavior. The Web Search API is not a crawler API; it only delivers results that the algorithms deem relevant for a human. Simply put, most humans won't skim through more than a few pages of results; furthermore, they expect to find relevant results on the first page.
If you could retrieve the results in the millions, you could simply copy their search index and Bing would be out of business.
Search indices seem to be things of political and economic power; as far as I know there are only four relevant search indices worldwide: from Google, from Microsoft (Bing), from Russia, and from China.
Those who control the search, control the Spice... ;-)

PL/SQL form submit exceeding Apache limit

I have inherited a PL/SQL application. The application works fine as long as the values passed to the submit procedure do not exceed a certain number. I have checked the variables through Fiddler when the HTTP 400 occurs, and the number of variables passed is quite large. The procedure's IN list consists of 12 parameters. The one case that causes the error is the user passing 283 sets of 12. If I manually limit the number to 195 sets or fewer, no error is returned. I'm not running out of array memory, but I think I am hitting some kind of server-side limit on processing the POST. I have tried talking with the server team based on the information from this link
What is the maximum length of a URL in different browsers?
but they don't seem to know how to do this.
I am looking for alternate ways to pass the values, maybe to a temp table, or to store all the variables in their respective arrays before passing, instead of this one long URL string of 3396 variables and values.
If anyone understands what I am talking about, any advice would be greatly appreciated.

Why LoadRunner correlation is failing

I want to correlate the value 181-418-5889 in the following statement: regSend&transferNumber=181-418-5889".
I used the regular web_reg_save_param, but it failed... any suggestions?
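
For reference, a capture of that value would typically look something like the sketch below; the boundaries are inferred from the snippet in the question, and the step name and URL are placeholders:

/* Register the capture just before the request whose response contains
   regSend&transferNumber=..., not before the request that reuses the number. */
web_reg_save_param("transferNumber",
    "LB=regSend&transferNumber=",    /* text immediately before the value   */
    "RB=\"",                         /* closing quote immediately after it  */
    "Search=Body",
    LAST);

web_url("showTransfer",
    "URL=http://example.com/transfer",   /* placeholder URL */
    "Mode=HTML",
    LAST);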
1- You are using the statement in the wrong location, for example placing it just before the request that uses the correlated value rather than just before the request whose response contains the value.
2- You are not receiving the correct page response, and as a result you may not be able to collect the value. The page may be an HTTP 200 page, but the content could be completely off. Always check for an appropriate expected result (see the sketch after this list).
3- Your left boundary, right boundary, and other parameters are incorrect for collecting the value you need.
4- You have not been through training and you are being forced by your management to learn this tool via trial and error.
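
As a minimal illustration of the check in point 2 (the expected text, step name, and URL are placeholders, not taken from the application under test):

/* Confirm the response is the page you expect before trying to capture from it. */
web_reg_find("Text=Transfer details",    /* hypothetical expected string */
             "SaveCount=page_ok",
             LAST);

web_url("showTransfer",
    "URL=http://example.com/transfer",   /* placeholder URL */
    "Mode=HTML",
    LAST);

if (atoi(lr_eval_string("{page_ok}")) == 0)
    lr_error_message("Expected content not found - correlation cannot succeed on this response");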
1- I am not using the statement in the wrong location, since I did find the needed value I want to correlate via the Tree view and put the statement just before the request that holds this value.
2- The page is not an HTTP 200.
3- The left and right boundaries are correct; I checked, and the text does exist twice in the response body.
4- I know the tool (LoadRunner), but the application is developed on the ZK platform and I am not sure whether ZK and LoadRunner are compatible, given that I did implement the dtid function in my script to have a static desktop id each time I replay the process.

Google Places API inconsistency

Adding the expected types parameter changes the response result in an unexpected way.
Request 1: https://maps.googleapis.com/maps/api/place/search/json?location=38.4551,-122.672045&radius=100&sensor=false&key=
Request 2: https://maps.googleapis.com/maps/api/place/search/json?location=38.4551,-122.672045&radius=100&sensor=false&types=park&key=
Both requests should return the place named "Howarth Park", since it is of type park. And the funny thing is that increasing the radius to 500 brings back the expected result. But then how come it is returned in the first request with no types parameter and the same radius?
This is due to the way the Google Places API processes Search Requests.
The Places API will return up to 20 establishment results within the specified radius. Additionally, area identity results may be returned to help identify the area in which the establishments are located.
If no type has been specified in the Places API search request, these area identity results are not strictly limited to the radius specified in the request; however, when a type has been specified, additional area identity results are strictly limited to the radius specified in the request.
