Issue with JMeter Regular Expression Extractor

I'm attempting to pull the text following:
<script id="startupVarsScript">
startupVars = {
startup:
using the Regular Expression Extractor, but I'm not having any joy.
I've tried the following regular expressions:
<script id="startupVarsScript">
startupVars = {
startup:
startupVars = {
startup:

You need to add a whitespace matcher (\s+) between the tokens. Full regex to test:
<script id="startupVarsScript">(\s+)startupVars(\s+)=(\s+){(\s+)startup
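As a quick sanity check outside JMeter (plain JavaScript; the sample text is taken from the question), the \s+ version matches where a single-line variant without whitespace handling does not:

```javascript
// Response fragment from the question, with its real line breaks and indentation.
const responseBody =
  '<script id="startupVarsScript">\n  startupVars = {\n    startup:';

// A variant with no whitespace handling fails, because the tokens are
// separated by newlines and indentation in the actual response.
const literal =
  /<script id="startupVarsScript">startupVars = {startup:/;

// With \s+ between the tokens, the line breaks and indentation are consumed.
const flexible =
  /<script id="startupVarsScript">(\s+)startupVars(\s+)=(\s+){(\s+)startup/;

console.log(literal.test(responseBody));  // false
console.log(flexible.test(responseBody)); // true
```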


Why does the browser tell me the parse is not defined even if I have imported it in the head of the HTML file?

I'm trying to connect my HTML files to the Parse server. I followed the Back4App guides and added the following code to the head of index.html, but the browser keeps telling me Parse is not defined.
<script src="https://npmcdn.com/parse/dist/parse.min.js"></script>
<script type="text/javascript">
Parse.serverURL = "https://parseapi.back4app.com";
Parse.initialize(
"MY_APP_ID",
"MY_JS_KEY"
);
</script>
Can you please test the code below? Note that the initialization is wrapped in a function, so it has to be called (e.g. from an onload handler) before any Parse calls are made:
<script src="https://cdnjs.cloudflare.com/ajax/libs/parse/2.1.0/parse.js"></script>
<script type="text/javascript">
function myFunction() {
Parse.initialize("APP_ID", "JS_KEY");
Parse.serverURL = 'https://parseapi.back4app.com/';
}
</script>
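Also note that Parse is not defined specifically means the SDK global was missing at the moment your code ran (a failed CDN load, or wrong script order). The guard can be sketched in standalone JavaScript, with a stand-in object instead of the real SDK (the stand-in's shape is illustrative, not the real Parse API surface):

```javascript
// Stand-in for the browser's window; on a real page, window.Parse is set
// by the SDK <script> tag once it has loaded.
const scopeWithSdk = {
  Parse: { initialize: (appId, jsKey) => ({ appId, jsKey }) },
};

function initParse(scope) {
  // This check is exactly what fails with "Parse is not defined":
  // the SDK script either didn't load or runs after this code.
  if (typeof scope.Parse === 'undefined') {
    return null;
  }
  return scope.Parse.initialize('APP_ID', 'JS_KEY');
}

console.log(initParse(scopeWithSdk)); // { appId: 'APP_ID', jsKey: 'JS_KEY' }
console.log(initParse({}));           // null, the situation in the question
```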

Scraping framework with xpath support

I'm looking for a web scraping framework that lets me
Hit a given endpoint and load the html response
Search for elements by some css selector
Recover the xpath for that element
Any suggestions? I've seen many that let me search by xpath, but none that actually generate the xpath for an element.
It seems to be true that not many people search by CSS selector yet want a result as an XPath instead, but there are some options to get there.
First, I wound up doing this with jQuery plus one additional function, because jQuery has pretty nice selection and is easy to find support for. You can use jQuery in Node.js, so you should be able to run my code on a server instead of on the client (as shown in my simple example). If that's not an option, look below for a potential solution using Python, or at the bottom for a C# starter.
For the jQuery approach, the pure JavaScript function for returning the XPath is pretty simple. In the following example (also on JSFiddle) I retrieved the example anchor element with a jQuery selector, got the underlying DOM element, and passed it to my getXPath function:
<html>
  <head>
    <title>The jQuery Example</title>
    <script type="text/javascript"
            src="http://ajax.googleapis.com/ajax/libs/jquery/2.1.3/jquery.min.js"></script>
    <script type="text/javascript">
      // Walk from the element up to the root, appending a positional
      // index whenever the element has same-tag siblings before it.
      function getXPath( element )
      {
          var xpath = '';
          for ( ; element && element.nodeType == 1; element = element.parentNode )
          {
              var id = $(element.parentNode).children(element.tagName).index(element) + 1;
              id > 1 ? (id = '[' + id + ']') : (id = '');
              xpath = '/' + element.tagName.toLowerCase() + id + xpath;
          }
          return xpath;
      }
      $(document).ready(function() {
          $("#example").click(function() {
              alert("Link XPath: " + getXPath($("#example")[0]));
          });
      });
    </script>
  </head>
  <body>
    <p id="p1">This is an example paragraph.</p>
    <p id="p2">This is an example paragraph with a <a id="example" href="#">link inside.</a></p>
  </body>
</html>
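Stripped of jQuery and the browser DOM, the same sibling-index logic can be checked in plain JavaScript with minimal stand-in nodes (the node shape here, parentNode / nodeType / tagName / children, mirrors just enough of the DOM for the walk):

```javascript
// Minimal stand-in nodes so the XPath logic can run outside a browser.
function makeNode(tagName, parent) {
  const node = { tagName, nodeType: 1, parentNode: parent, children: [] };
  if (parent) parent.children.push(node);
  return node;
}

// Same algorithm as the jQuery version: walk up the tree and add a
// positional index whenever an element is not the first of its tag
// among its siblings.
function getXPath(element) {
  let xpath = '';
  for (; element && element.nodeType === 1; element = element.parentNode) {
    let id = '';
    if (element.parentNode) {
      const sameTag = element.parentNode.children.filter(
        (c) => c.tagName === element.tagName
      );
      const index = sameTag.indexOf(element) + 1;
      if (index > 1) id = '[' + index + ']';
    }
    xpath = '/' + element.tagName.toLowerCase() + id + xpath;
  }
  return xpath;
}

// A structure like the example page: html > body > two <p> elements,
// with the link inside the second paragraph.
const html = makeNode('HTML', null);
const body = makeNode('BODY', html);
makeNode('P', body);
const p2 = makeNode('P', body);
const link = makeNode('A', p2);

console.log(getXPath(link)); // "/html/body/p[2]/a"
```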
There is a full library, css2xpath, for more robust CSS-selector-to-XPath conversions if you need more complexity than what I provided.
Python (lxml):
For Python you'll want to use lxml's CSS selector class (see link for full tutorial and docs) to get the xml node.
The CSSSelector class
The most important class in the lxml.cssselect module is CSSSelector.
It provides the same interface as the XPath class, but accepts a CSS
selector expression as input:
>>> from lxml.cssselect import CSSSelector
>>> sel = CSSSelector('div.content')
>>> sel  #doctest: +ELLIPSIS
<CSSSelector ... for 'div.content'>
>>> sel.css
'div.content'
The selector actually compiles to XPath, and you can see the
expression by inspecting the object:
>>> sel.path
"descendant-or-self::div[@class and contains(concat(' ', normalize-space(@class), ' '), ' content ')]"
To use the selector, simply call it with a document or element object:
>>> from lxml.etree import fromstring
>>> h = fromstring('''<div id="outer">
... <div id="inner" class="content body">
... text
... </div></div>''')
>>> [e.get('id') for e in sel(h)]
['inner']
Using CSSSelector is equivalent to translating with cssselect and
using the XPath class:
>>> from cssselect import GenericTranslator
>>> from lxml.etree import XPath
>>> sel = XPath(GenericTranslator().css_to_xpath('div.content'))
CSSSelector takes a translator parameter to let you choose which
translator to use. It can be 'xml' (the default), 'xhtml', 'html' or a
Translator object.
If you're looking to load from a URL, lxml can parse it directly: root = etree.parse("http://where.it/is/from.xml") (etree.fromstring takes a string, but accepts a base_url keyword for resolving relative references).
C#
There is a library called css2xpath-reloaded which does nothing but CSS to XPath conversion.
String css = "div#test .note span:first-child";
String xpath = css2xpath.Transform(css);
// 'xpath' will contain:
// //div[@id='test']//*[contains(concat(' ',normalize-space(@class),' '),' note ')]*[1]/self::span
Of course, getting a string from the url is very easy with C# utility classes and needs little discussion:
using (WebClient client = new WebClient()) {
    string s = client.DownloadString(url);
}
As for the selection with CSS Selectors, you could try Fizzler, which is pretty powerful. Here's the front page example, though you can do much more:
// Load the document using HtmlAgilityPack as normal
var html = new HtmlDocument();
html.LoadHtml(@"
  <html>
      <head></head>
      <body>
        <div>
          <p class='content'>Fizzler</p>
          <p>CSS Selector Engine</p></div>
      </body>
  </html>");

// Fizzler for HtmlAgilityPack is implemented as the
// QuerySelectorAll extension method on HtmlNode
var document = html.DocumentNode;

// yields: [<p class="content">Fizzler</p>]
document.QuerySelectorAll(".content");

// yields: [<p class="content">Fizzler</p>,<p>CSS Selector Engine</p>]
document.QuerySelectorAll("p");

// yields empty sequence
document.QuerySelectorAll("body>p");

// yields [<p class="content">Fizzler</p>,<p>CSS Selector Engine</p>]
document.QuerySelectorAll("body p");

// yields [<p class="content">Fizzler</p>]
document.QuerySelectorAll("p:first-child");

Adding a model attribute to the html template without encoding

I am using Spring Boot (ver. 1.1.1.RELEASE) and trying to add a string model attribute to an HTML template.
The controller:
@RequestMapping({"/", ""})
public String template(Model model) {
model.addAttribute("coolStuff", coolStuff);
return "panel/index";
}
The HTML template:
<script type="text/javascript" th:inline="text">
/*<![CDATA[*/
[[${coolStuff}]]
/*]]>*/
</script>
Thymeleaf's th:inline in "text" mode used to work very well for this, but now it adds HTML encoding (escaping characters) to the provided string.
th:inline in "javascript" mode escapes double quotes, so that doesn't work either.
Is there any way to insert a string from a model attribute into the HTML template without encoding?
You can use th:utext to disable Thymeleaf's escaping, but I'm not aware of a way to use it in combination with th:inline. I think you can still achieve what you want, but you'd have to change the value of coolStuff to contain the entirety of the <script> block.
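For example, if coolStuff were changed to hold the full script source, the template side might look like the following sketch (th:utext is Thymeleaf's unescaping attribute; th:remove="tag" drops the wrapper element from the output — treat the combination as an untested assumption for your Thymeleaf version):

```html
<!-- Outputs the model attribute unescaped; th:remove="tag" strips the <div>
     so only the raw script markup remains in the rendered page. -->
<div th:utext="${coolStuff}" th:remove="tag"></div>
```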

How to avoid LazyLoad effect on specific class?

Currently I'm using the following code, found on the web (using jq instead of the $ sign):
<script type="text/javascript" src="jquery.min.js"></script>
<script type="text/javascript" src="jquery.lazyload.mini.js"></script>
<script type="text/javascript">
  var jq = jQuery.noConflict();
  jq(function() {
      jq("img").lazyload({
          placeholder : "template/eis_d25_022/common/grey.gif",
          effect : "fadeIn"
      });
  });
</script>
The above code just auto-lazyloads everything in my forum without changing the image class, image src, etc. However, some images with a specific class only load intermittently:
<img src="straightlogo/what.png" class="vm" alt=" " original="straightlogo/what.png">
How do I avoid the lazyload effect on images with the vm class? Or how do I make the vm class work?
You can exclude elements with the .not() filter:
jq("img").not(".vm").lazyload({ ... });
However, it would be easier to give a dedicated class, such as lazy, only to the images you want lazyloaded. Also note that with the current version of the plugin you put the placeholder in the src attribute and the real image in the data-original attribute. For example:
<img class="lazy" data-original="img/example.jpg" src="img/grey.gif" width="640" height="480">
Then you could do something like:
jq("img.lazy").lazyload({ ... });
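For reference, .not(".vm") keeps only the elements that do not match the selector. The same filtering can be sketched over plain objects in Node (the class names come from the question; the list itself is made up):

```javascript
// Stand-ins for the <img> elements jQuery would collect with jq("img").
const imgs = [
  { src: 'straightlogo/what.png', className: 'vm' },
  { src: 'banner.png', className: '' },
  { src: 'icon.png', className: 'vm small' },
];

// jq("img").not(".vm") boils down to: drop anything whose class list
// contains "vm"; only the remaining images would be lazyloaded.
const lazyloaded = imgs.filter(
  (img) => !img.className.split(/\s+/).includes('vm')
);

console.log(lazyloaded.map((img) => img.src)); // ['banner.png']
```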

What is the purpose of $.jgrid.useJSON = true?

I've seen this line a lot but can't find an answer:
$.jgrid.useJSON = true;
What is the purpose?
Typically I include jqGrid in the following way
<link rel="stylesheet" type="text/css" href="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.23/themes/redmond/jquery-ui.css" />
<link rel="stylesheet" type="text/css" href="http://www.ok-soft-gmbh.com/jqGrid/jquery.jqGrid-4.4.1/css/ui.jqgrid.css" />
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script>
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.23/jquery-ui.min.js"></script>
<script type="text/javascript" src="http://www.ok-soft-gmbh.com/jqGrid/jquery.jqGrid-4.4.1/js/i18n/grid.locale-en.js"></script>
<script type="text/javascript">
$.jgrid.no_legacy_api = true;
$.jgrid.useJSON = true;
</script>
<script type="text/javascript" src="http://www.ok-soft-gmbh.com/jqGrid/jquery.jqGrid-4.4.1/js/jquery.jqGrid.min.js"></script>
<script type="text/javascript" src="http://www.ok-soft-gmbh.com/jqGrid/json2.js"></script>
So one should first include grid.locale-en.js, which defines $.jgrid; then you can set $.jgrid.useJSON and $.jgrid.no_legacy_api, and the jqGrid implementation loaded later in jquery.jqGrid.min.js will use those settings.
The option $.jgrid.useJSON is used in $.jgrid.parse to decide whether JSON strings are parsed with JSON.parse or with eval.
To be exact, the method $.jgrid.parse is not used very frequently. The relevant cases are:
parsing of the input datastr when the value of datastr has "string" type and datatype: "jsonstring" is used
parsing of postData.filters (the filter parameter used with local datatype and for advanced searching)
parsing the JSON response from the server for subgrids in the case of subgridtype: "json"
inside the jqGridImport method implementation
So the usage of $.jgrid.useJSON = true; is recommended, but it will probably not noticeably influence the performance of your program, because the most important JSON parsing is typically done by jQuery internally (by jQuery.ajax) and not by jqGrid code.
Looking into the sources makes it all clear:
parse : function(jsonString) {
    ...
    return ($.jgrid.useJSON === true && typeof (JSON) === 'object' && typeof (JSON.parse) === 'function')
        ? JSON.parse(js)
        : eval('(' + js + ')');
}
So basically it says: to parse a JSON string, use the JavaScript JSON API if possible, instead of eval.
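The same choice can be reproduced in a few lines of standalone JavaScript (a sketch of the idea, not jqGrid's exact code):

```javascript
// Use the native JSON API when asked to (and when it exists),
// otherwise fall back to eval the way legacy jqGrid did.
function parseJson(js, useJSON) {
  return (useJSON === true &&
          typeof JSON === 'object' &&
          typeof JSON.parse === 'function')
    ? JSON.parse(js)
    : eval('(' + js + ')');
}

const row = '{"id": 42, "name": "row1"}';
console.log(parseJson(row, true).id);    // 42, via the safe JSON.parse path
console.log(parseJson(row, false).name); // 'row1', via the legacy eval path
```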
