I have a web service I need to test with JMeter, but the spec requires compressed JSON. I need to create the JSON from data in a CSV file. I have been able to work that out using a Beanshell PreProcessor and a CSV Data Set Config. But now I need a way to gzip the data and send it to the server. Is there some sampler out there that will do this?
Hackish Solution
The only working solution I've found is to compress the data from the Beanshell script and then have JMeter send the file, but this seems a bit nasty to me.
import com.eclipsesource.json.*;
import java.io.FileOutputStream;
import java.io.BufferedWriter;
import java.io.OutputStreamWriter;
import java.util.zip.GZIPOutputStream;
jsonObject = new JsonObject();
// populate json data here
GZIPOutputStream zip = new GZIPOutputStream(new FileOutputStream("c:\\json.gz"));
writer = new BufferedWriter(new OutputStreamWriter(zip, "UTF-8"));
jsonObject.writeTo(writer);
writer.close();
If there are no samplers that do the compression, is there a way to avoid writing the data to the file system? Should I make a post processor to delete the temp file? Thanks for your help!
None of the samplers I know will gzip the payload for you automatically.
You can use a ByteArrayOutputStream instead of a FileOutputStream, then toString it into a JMeter variable or property, so you can reference it in the sampler by variable name.
import com.eclipsesource.json.*;
import java.io.ByteArrayOutputStream;
import java.io.BufferedWriter;
import java.io.OutputStreamWriter;
import java.util.zip.GZIPOutputStream;
jsonObject = new JsonObject();
// populate json data here
ByteArrayOutputStream baos = new ByteArrayOutputStream();
GZIPOutputStream zip = new GZIPOutputStream(baos);
writer = new BufferedWriter(new OutputStreamWriter(zip, "UTF-8"));
jsonObject.writeTo(writer);
writer.close(); // closing the writer also finishes the gzip stream
// keep a reference to the ByteArrayOutputStream: calling toString() on the
// GZIPOutputStream itself would give you the object's default toString, not the data.
// ISO-8859-1 maps bytes 1:1 to chars, so the binary gzip data survives the String round trip
vars.put("ZIPFILE", baos.toString("ISO-8859-1"));
Then, in the Body Data section of your HTTP Request, refer to the variable as ${ZIPFILE}.
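For what it's worth, here is the same idea as a minimal JSR223/Groovy sketch (assuming the JSON text is already in a variable named payload; that name is my placeholder, not something JMeter provides):

import java.util.zip.GZIPOutputStream

// gzip the JSON in memory; no temp file needed
def baos = new ByteArrayOutputStream()
new GZIPOutputStream(baos).withWriter('UTF-8') { it << vars.get('payload') }
// ISO-8859-1 maps bytes 1:1 to chars, so the binary gzip data survives the String round trip
vars.put('ZIPFILE', baos.toString('ISO-8859-1'))

Whichever way you build the body, you will likely also need to set the HTTP Request's "Content encoding" field to ISO-8859-1 (so JMeter turns the string back into the same bytes) and add a Content-Encoding: gzip header via an HTTP Header Manager so the server knows the body is compressed.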
Related
I am using a Beanshell Sampler to copy the content of one PDF into another PDF.
In the Beanshell Sampler I put the following code:
import java.io.BufferedInputStream;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileInputStream;
File file = new File("C:\\Users\\hp\\Downloads\\Instructions.pdf");
FileInputStream in = new FileInputStream(file);
ByteArrayOutputStream bos = new ByteArrayOutputStream();
byte[] buffer = new byte[1024];
for (int i; (i = in.read(buffer)) != -1; ) {
    bos.write(buffer, 0, i);
}
in.close();
byte[] pdfdata = bos.toByteArray();
bos.close();
vars.put("pdfdata",new String(pdfdata));
Then I use the ${pdfdata} variable in a Beanshell PostProcessor to write the content to another PDF.
Beanshell PostProcessor code:
import java.io.BufferedWriter;
import java.io.FileWriter;
FileWriter fstream = new FileWriter("newresult1.pdf", true);
BufferedWriter out = new BufferedWriter(fstream);
out.write(vars.get("pdfdata"));
out.close();
fstream.close();
The file is created, but when I open it, it's blank; no content is displayed.
So can anyone please tell me how to fix this issue?
Since JMeter 3.1 you should be using JSR223 Test Elements and Groovy language for scripting.
I fail to see any "conversion" in your code; it looks like you're just trying to make a copy of the file. The copy comes out unreadable because binary PDF data gets mangled when you round-trip it through a String. With Groovy it will be only one line:
new File('newresult1.pdf').bytes = new File('C:\\Users\\hp\\Downloads\\Instructions.pdf').bytes
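And if you want something slightly more explicit than the one-liner, a byte-safe sketch for a JSR223 element (paths taken from the question):

import java.nio.file.*

// copy the file as raw bytes; never round-trip binary data through a String
Files.copy(Paths.get('C:\\Users\\hp\\Downloads\\Instructions.pdf'),
        Paths.get('newresult1.pdf'),
        StandardCopyOption.REPLACE_EXISTING)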
More information on Groovy scripting in JMeter: Apache Groovy: What Is Groovy Used For?
I have found no way in NiFi to extract attributes directly from Avro, so I am using ConvertAvroToJson -> EvaluateJsonPath -> ConvertJsonToAvro as a workaround.
But I would like to write a script that extracts the attributes from the Avro flow file for use in an ExecuteScript processor, to determine whether that is a better approach.
Does anyone have a script to do this? Otherwise, I may end up using the original approach.
Thanks,
Kevin
Here's a Groovy script (which needs the Avro JAR in its Module Directory property) where I let the user specify dynamic properties with JSONPath expressions to be evaluated against the Avro file. Ironically it does a GenericData.toString() which converts the record to JSON anyway, but perhaps there is some code in here you could reuse:
import org.apache.avro.*
import org.apache.avro.generic.*
import org.apache.avro.file.*
import groovy.json.*
import org.apache.commons.io.IOUtils
import java.nio.charset.*
flowFile = session.get()
if(!flowFile) return
final GenericData genericData = GenericData.get();
slurper = new JsonSlurper().setType(JsonParserType.INDEX_OVERLAY)
pathAttrs = this.binding?.variables?.findAll {attr -> attr.key.startsWith('avro.path')}
newAttrs = [:]
try {
    session.read(flowFile, { inputStream ->
        def reader = new DataFileStream<>(inputStream, new GenericDatumReader<GenericRecord>())
        GenericRecord currRecord = null;
        if (reader.hasNext()) {
            currRecord = reader.next();
            log.info(genericData.toString(currRecord))
            record = slurper.parseText(genericData.toString(currRecord))
            pathAttrs?.each { k, v ->
                object = record
                v.value.tokenize('.').each {
                    object = object[it]
                }
                newAttrs[k - "avro.path."] = String.valueOf(object)
            }
            reader.close()
        }
    } as InputStreamCallback)
    newAttrs.each { k, v ->
        flowFile = session.putAttribute(flowFile, k, v)
    }
    session.transfer(flowFile, REL_SUCCESS)
} catch (e) {
    log.error("Error during Avro Path: {}", [e.message] as Object[], e)
    session.transfer(flowFile, REL_FAILURE)
}
If you meant to extract Avro metadata as opposed to record fields (not totally sure what you meant by "attributes"), also check MergeContent's AvroMerge, as there is some code in there to pull Avro metadata; see the sketch below.
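For reference, here is a minimal ExecuteScript sketch of my own (untested against your flow, same Avro JAR requirement as above) that copies the Avro file-level metadata into flowfile attributes:

import org.apache.avro.file.DataFileStream
import org.apache.avro.generic.*
import org.apache.nifi.processor.io.InputStreamCallback

flowFile = session.get()
if (!flowFile) return
def meta = [:]
try {
    session.read(flowFile, { inputStream ->
        def reader = new DataFileStream<>(inputStream, new GenericDatumReader<GenericRecord>())
        // file-level metadata: avro.schema, avro.codec, plus any custom keys
        reader.metaKeys.each { key ->
            meta["avro.meta.${key}".toString()] = reader.getMetaString(key) ?: ''
        }
        reader.close()
    } as InputStreamCallback)
    meta.each { k, v -> flowFile = session.putAttribute(flowFile, k, v) }
    session.transfer(flowFile, REL_SUCCESS)
} catch (e) {
    log.error('Failed to read Avro metadata', e)
    session.transfer(flowFile, REL_FAILURE)
}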
If you are extracting simple patterns from a single Avro record per flowfile, ExtractText may be sufficient for you. If you want to take advantage of the new record processing available in Apache NiFi 1.3.0, AvroReader is where you should start, and there are a series of blogs describing this process in detail. You can also extract Avro metadata with ExtractAvroMetadata.
Below is a script that helps me build an ExtentReports report for JMeter. It is a JSR223 PostProcessor element. It's working nicely; the problem is that I have it duplicated after every HTTP Request in the script. I have several scripts with hundreds of HTTP requests that would each need essentially a copy of the same PostProcessor Groovy script. This is hard to maintain!
I have tried splitting the common parts into an external Groovy script that I call from the JSR223 PostProcessor. I also tried chunking up bits of the script and putting the values into a CSV, so that I could just update the CSV values if anything changed.
I'm sure there's a cleaner/better way to do this, but I'm still learning, so I'm not sure of the best way to make this easier to maintain. Here's the JSR223 PostProcessor. The only bit that changes with each HTTP request is the "//test result" section.
import com.relevantcodes.extentreports.ExtentReports;
import com.relevantcodes.extentreports.ExtentTest;
import com.relevantcodes.extentreports.LogStatus;
import java.io.File;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
//configure object for response data
def response = prev.getResponseDataAsString();
//configure extentreports objects
ExtentReports report;
ExtentTest testLogger;
//set location for file and report config
String resultsPath = "C:/portalQA/Automation/Results/";
String configPath = "C:/portalQA/Automation/JMeter/TestReportConfig/";
String reportPath = resultsPath + "Login_Results_${reportDate}_${currentTime}_${testenv}.html";
File file = new File(reportPath);
if (!file.exists()) {
    // if the file does not exist, create it
    report = new ExtentReports(reportPath, true);
    report.loadConfig(new File(configPath + "extent-config.xml"));
} else {
    // else append to the existing report
    report = new ExtentReports(reportPath, false);
    report.loadConfig(new File(configPath + "extent-config.xml"));
}
//test result
testLogger = report.startTest("Authenticate");
testLogger.assignCategory("Initialize Session");
if (response.contains("com.blah.portal.model.User")) {
    testLogger.log(LogStatus.PASS, "Logged in with: ${username}");
    testLogger.log(LogStatus.INFO, response);
} else {
    testLogger.log(LogStatus.FAIL, "Could not authenticate session");
    testLogger.log(LogStatus.INFO, response);
}
log.info("Authenticate");
print("Authenticate print");
report.endTest(testLogger);
report.flush();
I see two options:
I suggest using a JSR223 Listener instead. First of all, that way you will only have one listener in your script, which resolves your original problem. But it is also a better option for writing to a file in general: a listener has only one instance shared by all running threads, so you won't create a race condition when writing to the file.
If you'd rather have a post-processor, you can put it on a higher level (not under any particular sampler), which will cause it to run after each request within the same scope or below.
For example, a configuration like
Thread Group
    Post-processor
    Sampler 1
    ...
    Sampler N
will cause the Post-processor to run after each of Sampler 1...Sampler N.
In both cases you may need to check which sampler you are processing and skip those you don't want to add to your report (the easiest way is to come up with a naming convention for excluded samplers, as in the sketch below).
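A minimal sketch of that check for a JSR223 Listener, assuming a hypothetical convention where samplers whose names end in ".noreport" are excluded (the suffix is my invention, pick your own):

// sampleResult is bound automatically in a JSR223 Listener
def samplerName = sampleResult.getSampleLabel()
if (samplerName.endsWith('.noreport')) {
    return    // skip samplers excluded by the naming convention
}
// ...the usual reporting code runs for everything else...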
I also faced the same challenge. In my case I needed to check whether the JSON response from a REST service was correct. I solved it in the following way.
I've created a JSR223 PreProcessor under the script root. It contains my custom class to handle JSON parsing and asserts.
import groovy.json.JsonSlurper
import org.apache.jmeter.assertions.AssertionResult
class CustomAssert {
    def parseResponse(json) {
        def jsonSlurper = new JsonSlurper()
        return jsonSlurper.parseText(json)
    }
    def assertResult(assertionResult, expectedResult, actualResult) {
        if (!expectedResult.equals(actualResult)) {
            assertionResult.setFailure(true);
            assertionResult.setFailureMessage("Expected ${expectedResult} but was ${actualResult}");
        }
    }
}
vars.putObject('customAssert', new CustomAssert())
Note the last line:
vars.putObject('customAssert', new CustomAssert())
I put an instance of my CustomAssert into vars.
Then under my HTTP Requests I've added a JSR223 Assertion:
def a = vars.getObject('customAssert')
def response = a.parseResponse(prev.getResponseDataAsString())
a.assertResult(AssertionResult, 'DRY', response.sensorResultHolderUIs[0].result.toString())
a.assertResult(AssertionResult, 'DRY', response.sensorResultHolderUIs[1].result.toString())
a.assertResult(AssertionResult, 'DRY', response.sensorResultHolderUIs[2].result.toString())
It basically retrieves the instance of CustomAssert from vars and calls its methods. I can add as many JSR223 Assertions as I want. The only code that is copied is these two lines at the top:
def a = vars.getObject('customAssert')
def response = a.parseResponse(prev.getResponseDataAsString())
To sum up:
1. Take the common part of your code (the part that doesn't have to be copied).
2. Wrap it in a class.
3. Put the class in a JSR223 PreProcessor under the root and export an instance via vars.
4. Take the rest of your code and adjust it to use the class defined in step 2.
5. Put that code in as many JSR223 Assertions as you want, remembering to retrieve the instance created in step 3 from vars.
Thank you user1053510. Your advice led me to build my own JSR223 Listener that renders the report. Below is the code in my JSR223 Listener:
import com.aventstack.extentreports.*;
import com.aventstack.extentreports.reporter.*;
import com.aventstack.extentreports.markuputils.*;
ExtentHtmlReporter htmlReporter;
ExtentReports extent;
ExtentTest test;
// create the HtmlReporter
htmlReporter = new ExtentHtmlReporter("C:/AUTO_Results/Results_${testApp}_${reportDate}_${currentTime}_${testenv}.html");
//configure report
htmlReporter.config().setCreateOfflineReport(true);
htmlReporter.config().setChartVisibilityOnOpen(true);
htmlReporter.config().setDocumentTitle("${testApp} Results");
htmlReporter.config().setEncoding("utf-8");
htmlReporter.config().setReportName("${testApp} Results ${reportDate}_${currentTime}_${testenv}");
htmlReporter.setAppendExisting(true);
// create ExtentReports
extent = new ExtentReports();
// attach reporter to ExtentReports
extent.attachReporter(htmlReporter);
extent.setReportUsesManualConfiguration(true);
// Show Env section and set data on dashboard
extent.setSystemInfo("Tool","JMeter");
extent.setSystemInfo("Test Env","${testenv}");
extent.setSystemInfo("Test Date","${reportDate}");
extent.setSystemInfo("Test Time","${currentTime}");
//stringify test info
String threadName = sampler.getThreadName();
String samplerName = sampler.getName();
String requestData = props.get("propRequestData");
String respCode = props.get("propRespCode");
String respMessage = props.get("propRespMessage");
String responseData = props.get("propResponse");
// create test
test = extent.createTest(threadName+" - "+samplerName);
//test.assignCategory("API Testing");
// analyze sampler result
if (vars.get("JMeterThread.last_sample_ok") == "false") {
    log.error("FAILED: " + samplerName);
    print("FAILED: " + samplerName);
    test.fail(MarkupHelper.createLabel("FAILED: " + sampler.getName(), ExtentColor.RED));
} else if (vars.get("JMeterThread.last_sample_ok") == "true") {
    if (responseData.contains("#error")) {
        log.info("FAILED: " + sampler.getName());
        print("FAILED: " + sampler.getName());
        test.fail(MarkupHelper.createLabel("FAILED: " + sampler.getName(), ExtentColor.RED));
    } else if (responseData.contains("{")) {
        log.info("Passed: " + sampler.getName());
        print("Passed: " + sampler.getName());
        test.pass(MarkupHelper.createLabel("Passed: " + sampler.getName(), ExtentColor.GREEN));
    }
} else {
    log.error("Something is really wonky");
    print("Something is really wonky");
    test.fatal("Something is really wonky");
}
//info messages
test.info("RequestData: "+requestData);
test.info("Response Code and Message: "+respCode+" "+respMessage);
test.info("ResponseData: "+responseData);
//playing around
//markupify json into code blocks
//Markup m = MarkupHelper.createCodeBlock(requestData);
//test.info(MarkupHelper.createModal("Modal text"));
//Markup mCard = MarkupHelper.createCard(requestData, ExtentColor.CYAN);
// test.info("Request "+m);
// test.info(mCard);
// test.info("Response Data: "+MarkupHelper.createCodeBlock(props.get("propResponse")));
// test.info("ASSERTION MESSAGE: "+props.get("propAssertion"));
// end the reporting and save the file
extent.flush();
Then in each Thread Group I have a BeanShell Assertion with these lines:
//request data
String requestData = new String(prev.SamplerData);
//String requestData = new String(requestData);
props.put("propRequestData", requestData);
//response data
String respData = new String(prev.ResponseData);
//String respData = new String(prev.getResponseDataAsString());
props.put("propResponse", respData);
//response code
String respCode = new String(prev.ResponseCode);
props.put("propRespCode",respCode);
//response message
String respMessage = new String(prev.ResponseMessage);
props.put("propRespMessage",respMessage);
I am getting this error: jmeter.util.BeanShellInterpreter: Error invoking bsh method: eval org/json/simple/JSONArray
I am importing Java classes from a project JAR file which is placed in JMeter's lib folder.
Here is the source code, written to pull JSON values into ArrayLists:
import jsonresponse.common.JsonResponseProcessor;
import assertions.AssertResponse;
import databaseresponse.common.DbResponseProcessors;
import org.json.simple.JSONArray;
String resJson = prev.getResponseDataAsString();
String res = resJson.get("queueId");
log.info("----->>>>"+resJson);
ArrayList list1 = new ArrayList();
list1.add("queueId");
list1.add("name");
list1.add("faxNumber");
list1.add("description");
list1.add("type");
ArrayList list2 = new ArrayList();
list2.add("userId");
ArrayList list3 = new ArrayList();
list3.add("agencyId");
ArrayList list4 = new ArrayList();
list4.add("usertypeId");
JsonResponseProcessor obj = new JsonResponseProcessor();
System.out.println("%%%%%%%%%%%%%%fgfggfffgf%%%%%%%%%%%%%%%");
ArrayList Jsoin = obj.getvalueofsubmapoflist(resJson,".result[0]",list1);
System.out.println("%%%%%%%%%%%%%%%%%%%%%%%%%%%%%" +Jsoin);
log.info(">>>"+Jsoin);
Make sure you have all referenced .jar files in the JMeter classpath (i.e. copy them to JMeter's "lib" folder). Don't forget to restart JMeter to pick the jars up.
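As an alternative to copying jars into "lib", you can point JMeter at them with the user.classpath property in user.properties (the jar path below is just an example, use your own):

user.classpath=C:/myproject/lib/json-simple-1.1.1.jar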
If you still experience issues, try putting your code inside a try block like:
try {
    //your code here
}
catch (Throwable ex) {
    log.error("Problem in Beanshell", ex);
    throw ex;
}
This way you'll have an informative stacktrace printed to the jmeter.log file.
One more way to get more information about your Beanshell script is putting the debug(); directive at the beginning of your code. It will trigger debugging output to stdout.
See How to Use BeanShell: JMeter's Favorite Built-in Component article for more information on using Java and JMeter APIs from Beanshell test elements in JMeter tests.
I am trying to perform a BulkLoad into HBase. The input to the MapReduce job is an HDFS file (from Hive).
I use the code below in the Tool (Job) class to initiate the bulk loading process:
HFileOutputFormat.configureIncrementalLoad(job, new HTable(config, TABLE_NAME));
In the Mapper, I use the following as the Mapper's output:
context.write(new ImmutableBytesWritable(Bytes.toBytes(hbaseTable)), put);
Once the mapper completes, I perform the actual bulk load using:
LoadIncrementalHFiles loadFfiles = new LoadIncrementalHFiles(configuration);
HTable hTable = new HTable(configuration, tableName);
loadFfiles.doBulkLoad(new Path(pathToHFile), hTable);
The job runs fine, but once LoadIncrementalHFiles starts, it hangs forever; I had to stop the job after many attempts. After a long wait of maybe 30 minutes, I finally got the above error. After extensive searching I found that HBase tries to access the HFiles placed in the output folder, and that folder does not grant write or execute permission, hence the error. So the alternative solution is to add file access permissions in Java code before the bulk load is performed, as below:
FileSystem fileSystem = FileSystem.get(config);
fileSystem.setPermission(new Path(outputPath),FsPermission.valueOf("drwxrwxrwx"));
Is this the correct approach as we move from development to production? Also, once I added the above code, I got a similar error for the folder created inside the output folder. This time it's the column family folder, which is created dynamically at runtime.
As a temporary workaround, I did the following and was able to move ahead:
fileSystem.setPermission(new Path(outputPath+"/col_fam_folder"),FsPermission.valueOf("drwxrwxrwx"));
Both steps seem to be workarounds, and I need a correct solution before moving to production. Thanks in advance.
Try this (set it before creating the Hadoop Configuration; it makes the client act as a user with write access to the HFile output directory):
System.setProperty("HADOOP_USER_NAME", "hadoop");
Secure bulk load seems to be an appropriate answer. This thread explains a sample implementation; the snippet is copied below.
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.client.coprocessor.SecureBulkLoadClient;
import org.apache.hadoop.hbase.security.UserProvider;
import org.apache.hadoop.hbase.security.token.FsDelegationToken;
import org.apache.hadoop.hbase.util.Pair;
import org.apache.hadoop.security.UserGroupInformation;
String keyTab = "pathtokeytabfile";
String tableName = "tb_name";
String pathToHFile = "/tmp/tmpfiles/";
Configuration configuration = new Configuration();
configuration.set("hbase.zookeeper.quorum","ZK_QUORUM");
configuration.set("hbase.zookeeper"+ ".property.clientPort","2181");
configuration.set("hbase.master","MASTER:60000");
configuration.set("hadoop.security.authentication", "Kerberos");
configuration.set("hbase.security.authentication", "kerberos");
//Obtaining kerberos authentication
UserGroupInformation.setConfiguration(configuration);
UserGroupInformation.loginUserFromKeytab("name of the keytab user", keyTab); // principal, path to the keytab
HBaseAdmin.checkHBaseAvailable(configuration);
System.out.println("HBase is running!");
HBaseConfiguration.addHbaseResources(configuration);
Connection conn = ConnectionFactory.createConnection(configuration);
Table table = conn.getTable(TableName.valueOf(tableName));
HRegionInfo tbInfo = new HRegionInfo(table.getName());
//path to the HFiles that need to be loaded
Path hfofDir = new Path(pathToHFile);
//acquiring user token for authentication
UserProvider up = UserProvider.instantiate(configuration);
FsDelegationToken fsDelegationToken = new FsDelegationToken(up, "name of the key tab user");
fsDelegationToken.acquireDelegationToken(hfofDir.getFileSystem(configuration));
//preparing for the bulk load
SecureBulkLoadClient secureBulkLoadClient = new SecureBulkLoadClient(table);
String bulkToken = secureBulkLoadClient.prepareBulkLoad(table.getName());
System.out.println(bulkToken);
//creating the family list (list of family names and path to the hfile corresponding to the family name)
final List<Pair<byte[], String>> famPaths = new ArrayList<>();
Pair<byte[], String> p = new Pair<>();
//name of the family
p.setFirst("nameofthefamily".getBytes());
//path to the HFile (HFile are organized in folder with the name of the family)
p.setSecond("/tmp/tmpfiles/INTRO/nameofthefilehere");
famPaths.add(p);
//bulk loading ,using the secure bulk load client
secureBulkLoadClient.bulkLoadHFiles(famPaths, fsDelegationToken.getUserToken(), bulkToken, tbInfo.getStartKey());
System.out.println("Bulk Load Completed..");