Trying to read a CSV file in JMeter using a Beanshell Sampler

I am trying to read the cells in a CSV file. This is the code I have. Please let me know where I am going wrong. I don't see any error in the log viewer, but it isn't printing anything.
import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVParser;
import org.apache.commons.csv.CSVRecord;
import java.io.IOException;
import java.io.Reader;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ApacheCommonsCSV {
    public void readCSV() throws IOException {
        String CSV_File_Path = "C:\\source\\Test.csv";
        // read the file
        Reader reader = Files.newBufferedReader(Paths.get(CSV_File_Path));
        // parse the file into csv values
        CSVParser csvParser = new CSVParser(reader, CSVFormat.DEFAULT);
        for (CSVRecord csvRecord : csvParser) {
            // Accessing Values by Column Index
            String name = csvRecord.get(0);
            String product = csvRecord.get(1);
            // print the value to console
            log.info("Record No - " + csvRecord.getRecordNumber());
            log.info("---------------");
            log.info("Name : " + name);
            log.info("Product : " + product);
            log.info("---------------");
        }
    }
}

You're declaring the readCSV() function but not calling it anywhere; that's why your code doesn't even get executed.
You need to add the readCSV() function call. If you're lucky it will start working, and if not, you will see an error in the jmeter.log file.
For example, like this:
import java.io.IOException;
import java.io.Reader;
import java.nio.file.Files;
import java.nio.file.Paths;
import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVParser;
import org.apache.commons.csv.CSVRecord;

public class ApacheCommonsCSV {
    public void readCSV() throws IOException {
        String CSV_File_Path = "C:\\source\\Test.csv";
        // read the file
        Reader reader = Files.newBufferedReader(Paths.get(CSV_File_Path));
        // parse the file into csv values
        CSVParser csvParser = new CSVParser(reader, CSVFormat.DEFAULT);
        for (CSVRecord csvRecord : csvParser) {
            // Accessing Values by Column Index
            String name = csvRecord.get(0);
            String product = csvRecord.get(1);
            // print the value to console
            log.info("Record No - " + csvRecord.getRecordNumber());
            log.info("---------------");
            log.info("Name : " + name);
            log.info("Product : " + product);
            log.info("---------------");
        }
    }
}

new ApacheCommonsCSV().readCSV(); // here is the entry point
Just make sure to have commons-csv.jar in the JMeter Classpath.
Last 2 cents:
Any reason for not using the CSV Data Set Config?
Since JMeter 3.1 you should be using JSR223 Test Elements and the Groovy language for scripting, so consider migrating to Groovy right away. Check out Apache Groovy - Why and How You Should Use It for reasons, tips and tricks.
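As a quick illustration of that migration, here is a minimal JSR223 Sampler sketch of the same read. It is a sketch only: it sticks to Java-style syntax (which the Groovy engine accepts), assumes the same C:\source\Test.csv path, and relies on the log variable that JMeter injects into JSR223 elements:

import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVParser;
import org.apache.commons.csv.CSVRecord;
import java.io.Reader;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Iterator;

// No class or method wrapper is needed; the script body itself is the entry point.
Reader reader = Files.newBufferedReader(Paths.get("C:\\source\\Test.csv"));
CSVParser csvParser = new CSVParser(reader, CSVFormat.DEFAULT);
Iterator<CSVRecord> records = csvParser.iterator();
while (records.hasNext()) {
    CSVRecord record = records.next();
    log.info("Record No - " + record.getRecordNumber());
    log.info("Name : " + record.get(0));
    log.info("Product : " + record.get(1));
}
csvParser.close();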

Related

JMeter Create url dynamically based on a property

I have a setUp Thread Group which hits a URL and gets a lot of product ids:
/product/4564
/product/4534
/product/1234
....
I saved this in a property like this:
// Using jsr223
import org.apache.http.HttpHeaders;
import org.apache.http.client.config.RequestConfig;
import org.apache.http.client.methods.HttpUriRequest;
import org.apache.http.client.methods.RequestBuilder;
import org.apache.http.impl.client.HttpClientBuilder;
import org.apache.http.util.EntityUtils;
import org.apache.http.entity.StringEntity;
import com.google.gson.Gson;

List<String> sendRequest(String url, String method, Map<String, Object> body) {
    RequestConfig requestConfig = RequestConfig.custom()
        .setConnectTimeout(2000)
        .setSocketTimeout(3000)
        .build();
    StringEntity entity = new StringEntity(new Gson().toJson(body), "UTF-8");
    HttpUriRequest request = RequestBuilder.create(method)
        .setConfig(requestConfig)
        .setUri(url)
        .setHeader(HttpHeaders.CONTENT_TYPE, "application/json;charset=UTF-8")
        .setEntity(entity)
        .build();
    HttpClientBuilder.create().build().withCloseable { httpClient ->
        httpClient.execute(request).withCloseable { response ->
            String res = response.getEntity() != null ? EntityUtils.toString(response.getEntity()) : "";
            return Arrays.asList("result", res);
        }
    }
}

Map<String, Object> map = new LinkedHashMap<>();
SampleResult.setIgnore();
def test1 = sendRequest("localhost:8080/product/list", "GET", map);
ArrayList pathProduct = Arrays.toString(test1.get(1))
props.put("myProperty", pathProduct)
Then I have a Throughput Controller inside another Thread Group, and that's the reason to use a property instead of a variable: I read that a variable won't be available in another thread group.
Then I have an HTTP Request, and I set it like this:
protocol: http
server: localhost
path: ${__groovy(props.get("myProperty"))}
And it works only partially, because I'm getting 1 URL instead of N; the URL I'm getting is:
http://localhost/product/4564/product/4534/product/1234
And I want to get:
http://localhost/product/4564
http://localhost/product/4534
http://localhost/product/1234
.....
Any idea? Thanks in advance.
Finally, I created a file inside my script and consumed it with a CSV Data Set Config, and it worked. Thanks.
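For anyone hitting the same wall, here is a minimal sketch of that file-based approach for the setUp Thread Group script. The paths.csv name and the split regex are my assumptions, not the actual code used:

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;

// Split the concatenated property back into one /product/... path per line.
String joined = props.getProperty("myProperty"); // e.g. "/product/4564/product/4534..."
List<String> paths = Arrays.asList(joined.split("(?=/product/)"));
Files.write(Paths.get("paths.csv"), paths);

A CSV Data Set Config pointing at paths.csv with a variable name such as path then feeds the HTTP Request, whose Path field becomes ${path}; each iteration picks up the next line.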

natural language logic in stanford corenlp

How does one use the natural logic component of Stanford CoreNLP?
I am using CoreNLP 3.9.1 and I passed natlog as an annotator on the command line, but I don't see any natlog results in the output, i.e. OperatorAnnotation and PolarityAnnotation, according to this link. Does that have anything to do with the outputFormat? I've tried xml and json, but neither has any output on natural logic. The other stuff (tokenization, dependency parse) is in there, though.
Here is my command:
./corenlp.sh -annotators tokenize,ssplit,pos,lemma,depparse,natlog -file natlog.test -outputFormat xml
Thanks in advance.
I don't think any of the output options show the natlog annotations. They are designed more for the case where you have a Java system and are working with the Annotations themselves in Java code. You should be able to see them by looking at the CoreLabel for each token.
This code snippet works for me:
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.util.CoreMap;
import edu.stanford.nlp.ling.CoreAnnotations.NamedEntityTagAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.PartOfSpeechAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.SentencesAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.TextAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.TokensAnnotation;
import edu.stanford.nlp.pipeline.Annotation;
// this is the polarity annotation!
import edu.stanford.nlp.naturalli.NaturalLogicAnnotations.PolarityDirectionAnnotation;
// not the one below!
// import edu.stanford.nlp.ling.CoreAnnotations.PolarityAnnotation;
import edu.stanford.nlp.util.PropertiesUtils;
import java.io.*;
import java.util.*;

public class test {
    public static void main(String[] args) throws FileNotFoundException, UnsupportedEncodingException {
        // code from: https://stanfordnlp.github.io/CoreNLP/api.html#generating-annotations
        StanfordCoreNLP pipeline = new StanfordCoreNLP(
            PropertiesUtils.asProperties(
                // add natlog here
                "annotators", "tokenize,ssplit,pos,lemma,parse,depparse,natlog",
                "ssplit.eolonly", "true",
                "tokenize.language", "en"));
        // read some text in the text variable
        String text = "Every dog sees some cat";
        Annotation document = new Annotation(text);
        // run all Annotators on this text
        pipeline.annotate(document);
        // these are all the sentences in this document
        // a CoreMap is essentially a Map that uses class objects as keys and has values with custom types
        List<CoreMap> sentences = document.get(SentencesAnnotation.class);
        for (CoreMap sentence : sentences) {
            // traversing the words in the current sentence
            // a CoreLabel is a CoreMap with additional token-specific methods
            for (CoreLabel token : sentence.get(TokensAnnotation.class)) {
                // this is the text of the token
                String word = token.get(TextAnnotation.class);
                // this is the POS tag of the token
                String pos = token.get(PartOfSpeechAnnotation.class);
                // this is the NER label of the token
                String ne = token.get(NamedEntityTagAnnotation.class);
                // this is the polarity label of the token
                String pol = token.get(PolarityDirectionAnnotation.class);
                System.out.print(word + " [" + pol + "] ");
            }
            System.out.println();
        }
    }
}
The output will be:
Every [up] dog [down] sees [up] some [up] cat [up]
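The question also asked about OperatorAnnotation. I believe the same token-level pattern applies; the fragment below is my assumption, following the polarity example, and would go inside the token loop above. Only operator tokens (the quantifiers here) carry a value, so a null check is needed:

import edu.stanford.nlp.naturalli.NaturalLogicAnnotations.OperatorAnnotation;
import edu.stanford.nlp.naturalli.OperatorSpec;

// Quantifiers like "Every" and "some" carry an OperatorSpec; other tokens return null.
OperatorSpec op = token.get(OperatorAnnotation.class);
if (op != null) {
    System.out.println(token.word() + " is an operator: " + op.instance);
}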

How to get the cursor of each entity in a App Engine datastore query without performance hit?

I have a Datastore query using cursors (Objectify v5), and I want to get the cursor after each item in the result list. The code looks like this:
public List<Puzzle> queryWithCursor(String cursor, String order, int limit) {
    Query<Puzzle> query = ObjectifyService.ofy()
        .load()
        .type(Puzzle.class)
        .order(order)
        .limit(limit);
    query = query.startAt(Cursor.fromWebSafeString(cursor));
    List<Puzzle> puzzles = new ArrayList<>();
    QueryResultIterator<Puzzle> iterator = query.iterator();
    while (iterator.hasNext()) {
        Puzzle puzzle = iterator.next();
        puzzle.setCursor(iterator.getCursor().toWebSafeString());
        puzzles.add(puzzle);
    }
    return puzzles;
}
While the method works correctly, it triggers many Datastore queries behind the scenes. Basically, every time iterator.getCursor() runs, it triggers an additional query. I learnt from Stackdriver Trace that with a limit of 20, the method triggers 19 queries in total (it seems that the last .getCursor() does not trigger an additional query). So this method is even slower and more costly than a similar query using offset.
Is this really a bug? Is there a way to avoid the performance hit?
This is actually a fundamental behavior of the Datastore, at least in the old SDK (as opposed to the new SDK that Objectify 6 uses, which may or may not behave the same). Calling getCursor() at non-batch boundaries restarts the query. You can try it with the low-level API.
There is a workaround: make up your own cursor class. It should consist of the low-level Cursor and an offset. Explicitly set a chunk() size; then your cursor should consist of the Cursor at index 0 of the chunk plus an offset into the chunk.
Then when you want to restart a query at that cursor, use .startAt(chunkStartCursor).offset(offsetIntoChunk).
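A minimal sketch of that composite cursor (the PuzzleCursors and PuzzleCursor names and CHUNK_SIZE are my own; it assumes Objectify v5's chunk(), startAt() and offset() query methods):

import com.google.appengine.api.datastore.Cursor;
import com.googlecode.objectify.ObjectifyService;
import com.googlecode.objectify.cmd.Query;

public class PuzzleCursors {
    static final int CHUNK_SIZE = 20; // must match the chunk() of the query being resumed

    // Hypothetical composite cursor: the low-level Cursor at index 0 of a chunk,
    // plus an offset into that chunk.
    public static class PuzzleCursor {
        final String chunkStart;
        final int offset;

        PuzzleCursor(String chunkStart, int offset) {
            this.chunkStart = chunkStart;
            this.offset = offset;
        }
    }

    // Resume at the composite cursor instead of calling getCursor() per entity.
    public static Query<Puzzle> resume(PuzzleCursor c, String order, int limit) {
        return ObjectifyService.ofy()
            .load()
            .type(Puzzle.class)
            .order(order)
            .chunk(CHUNK_SIZE)
            .startAt(Cursor.fromWebSafeString(c.chunkStart))
            .offset(c.offset)
            .limit(limit);
    }
}

For reference, the standard low-level cursor pattern (one cursor per page, taken at a batch boundary) looks like this: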
import com.google.appengine.api.datastore.Cursor;
import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.FetchOptions;
import com.google.appengine.api.datastore.PreparedQuery;
import com.google.appengine.api.datastore.Query;
import com.google.appengine.api.datastore.Query.SortDirection;
import com.google.appengine.api.datastore.QueryResultList;
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ListPeopleServlet extends HttpServlet {
    static final int PAGE_SIZE = 15;
    private final DatastoreService datastore;

    public ListPeopleServlet() {
        datastore = DatastoreServiceFactory.getDatastoreService();
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        FetchOptions fetchOptions = FetchOptions.Builder.withLimit(PAGE_SIZE);
        // If this servlet is passed a cursor parameter, let's use it.
        String startCursor = req.getParameter("cursor");
        if (startCursor != null) {
            fetchOptions.startCursor(Cursor.fromWebSafeString(startCursor));
        }
        Query q = new Query("Person").addSort("name", SortDirection.ASCENDING);
        PreparedQuery pq = datastore.prepare(q);
        QueryResultList<Entity> results;
        try {
            results = pq.asQueryResultList(fetchOptions);
        } catch (IllegalArgumentException e) {
            // IllegalArgumentException happens when an invalid cursor is used.
            // A user could have manually entered a bad cursor in the URL or there
            // may have been an internal implementation detail change in App Engine.
            // Redirect to the page without the cursor parameter to show something
            // rather than an error.
            resp.sendRedirect("/people");
            return;
        }
        resp.setContentType("text/html");
        resp.setCharacterEncoding("UTF-8");
        PrintWriter w = resp.getWriter();
        w.println("<!DOCTYPE html>");
        w.println("<meta charset=\"utf-8\">");
        w.println("<title>Cloud Datastore Cursor Sample</title>");
        w.println("<ul>");
        for (Entity entity : results) {
            w.println("<li>" + entity.getProperty("name") + "</li>");
        }
        w.println("</ul>");
        String cursorString = results.getCursor().toWebSafeString();
        // This servlet lives at '/people'.
        w.println("<a href='/people?cursor=" + cursorString + "'>Next page</a>");
    }
}

Include YAML files from snakeyaml

I would like to have YAML files with an include, similar to this question, but with SnakeYAML:
How can I include a YAML file inside another?
For example:
%YAML 1.2
---
!include "load.yml"
!include "load2.yml"
I am having a lot of trouble with it. I have the Constructor defined, and I can make it import one document, but not two. The error I get is:
Exception in thread "main" expected '<document start>', but found Tag
in 'reader', line 5, column 1:
!include "load2.yml"
^
With one include, SnakeYAML is happy that it finds an EOF and processes the import. With two, it's not happy (above).
My Java source is:
package yaml;

import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.InputStream;
import org.yaml.snakeyaml.Yaml;
import org.yaml.snakeyaml.constructor.AbstractConstruct;
import org.yaml.snakeyaml.constructor.Constructor;
import org.yaml.snakeyaml.nodes.Node;
import org.yaml.snakeyaml.nodes.ScalarNode;
import org.yaml.snakeyaml.nodes.Tag;

public class Main {
    final static Constructor constructor = new MyConstructor();

    private static class ImportConstruct extends AbstractConstruct {
        @Override
        public Object construct(Node node) {
            if (!(node instanceof ScalarNode)) {
                throw new IllegalArgumentException("Non-scalar !import: " + node.toString());
            }
            final ScalarNode scalarNode = (ScalarNode) node;
            final String value = scalarNode.getValue();
            File file = new File("src/imports/" + value);
            if (!file.exists()) {
                return null;
            }
            try {
                final InputStream input = new FileInputStream(new File("src/imports/" + value));
                final Yaml yaml = new Yaml(constructor);
                return yaml.loadAll(input);
            } catch (FileNotFoundException ex) {
                ex.printStackTrace();
            }
            return null;
        }
    }

    private static class MyConstructor extends Constructor {
        public MyConstructor() {
            yamlConstructors.put(new Tag("!include"), new ImportConstruct());
        }
    }

    public static void main(String[] args) {
        try {
            final InputStream input = new FileInputStream(new File("src/imports/example.yml"));
            final Yaml yaml = new Yaml(constructor);
            Object object = yaml.load(input);
            System.out.println("Loaded");
        } catch (FileNotFoundException ex) {
            ex.printStackTrace();
        } finally {
        }
    }
}
The question is: has anybody done a similar thing with SnakeYAML? Any thoughts as to what I might be doing wrong?
I see two issues:
final InputStream input = new FileInputStream(new File("src/imports/" + value));
final Yaml yaml = new Yaml(constructor);
return yaml.loadAll(input);
You should be using yaml.load(input), not yaml.loadAll(input). The loadAll() method returns multiple objects, but the construct() method expects to return a single object.
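With that change, the body of construct() might look like this (a sketch; it also switches to try-with-resources so the stream gets closed, which means catching IOException instead of FileNotFoundException):

try (InputStream input = new FileInputStream(new File("src/imports/" + value))) {
    final Yaml yaml = new Yaml(constructor);
    return yaml.load(input); // a single object, which is what construct() must return
} catch (IOException ex) {
    ex.printStackTrace();
    return null;
}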
The other issue is that you may have some inconsistent expectations with the way that the YAML processing pipeline works:
If you think of your !include as working like in C, where the preprocessor pastes in the contents of the included file, then the place to implement it would be the Presentation stage (parsing) or the Serialization stage (composing). But you have implemented it in the Representation stage (constructing), so !include returns an object, and the structure of your YAML file must be consistent with that.
Let's say that you have the following files:
test1a.yaml
activity: "herding cats"
test1b.yaml
33
test1.yaml
favorites: !include test1a.yaml
age: !include test1b.yaml
This would work ok, and would be equivalent to:
favorites:
  activity: "herding cats"
age: 33
But the following file would not work:
!include test1a.yaml
!include test1b.yaml
because there is nothing to say how to combine the two values in a larger hierarchy. You'd need to do this, if you want an array:
- !include test1a.yaml
- !include test1b.yaml
or, again, handle this custom logic in an earlier stage such as parsing or composing.
Alternatively, you need to tell the YAML library that you are starting a 2nd document (which is what the error is complaining about: expected '<document start>'), since YAML supports multiple "documents" (top-level values) in a single .yaml file.
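Under that reading, the original file would gain an explicit separator before the second include, and the loading side would switch to loadAll(). A sketch:

%YAML 1.2
---
!include "load.yml"
---
!include "load2.yml"

// loadAll() constructs one object per document.
for (Object doc : yaml.loadAll(input)) {
    System.out.println(doc);
}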

freemarker error: expected hash. evaluated instead to freemarker.template.SimpleScalar

My template looks like this:
<#assign senti = "${scmr.results[model]}">
<#if senti??>
<td>${senti} ---- ${senti.sentimentType}</td>
<td>${senti.score?html}</td>
</#if>
The output looks like this:
POSITIVE(1.0/1) ---- Expected hash. senti evaluated instead to freemarker.template.SimpleScalar on line 5, column 27 in com/addthis/sentiment/sentidemo.ftl.
The output text before "----" indicates that senti is, indeed, a valid Java Sentiment object. The methods getSentimentType and getScore are present and working.
So, why am I getting the error?
With <#assign senti = "${scmr.results[model]}"> you have converted scmr.results[model] to a String (a scalar); that's why. Just write <#assign senti = scmr.results[model]>. In FreeMarker expressions you can interpolate a value into a string literal, like "Hello ${name}!" (same as "Hello " + name + "!"), and "${someExpression}" is just a case of that. It's not like in JSP.
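Applied to the template in the question, the corrected version would be (the bare ${senti} in the first cell has to go as well, since the bean is now a hash and cannot be printed directly):

<#assign senti = scmr.results[model]>
<#if senti??>
<td>${senti.sentimentType}</td>
<td>${senti.score?html}</td>
</#if>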
I had the same error when using Swagger-generated models with the Ninja framework; I fixed it by adding the class below in the conf package:
package conf;

import com.google.inject.Inject;
import freemarker.ext.beans.BeansWrapper;
import freemarker.ext.beans.MethodAppearanceFineTuner;
import freemarker.template.Configuration;
import freemarker.template.DefaultObjectWrapperBuilder;
import ninja.NinjaDefault;
import ninja.template.TemplateEngineFreemarker;

/**
 * Created by varya on 07/12/17.
 */
public class Ninja extends NinjaDefault {

    @Inject
    protected TemplateEngineFreemarker templateEngineFreemarker;

    @Override
    public void onFrameworkStart() {
        super.onFrameworkStart();
        Configuration freemarkerConfiguration = templateEngineFreemarker.getConfiguration();
        DefaultObjectWrapperBuilder owb = new DefaultObjectWrapperBuilder(Configuration.VERSION_2_3_23);
        owb.setMethodAppearanceFineTuner(new MethodAppearanceFineTuner() {
            @Override
            public void process(BeansWrapper.MethodAppearanceDecisionInput in, BeansWrapper.MethodAppearanceDecision out) {
                out.setMethodShadowsProperty(false);
            }
        });
        freemarkerConfiguration.setObjectWrapper(owb.build());
    }
}
