PDF file conversion in JMeter

I am using a Beanshell Sampler to copy the content of one PDF into another PDF.
In the Beanshell Sampler I put the following code:
import java.io.BufferedInputStream;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileInputStream;
File file = new File("C:\\Users\\hp\\Downloads\\Instructions.pdf");
FileInputStream in = new FileInputStream(file);
ByteArrayOutputStream bos = new ByteArrayOutputStream();
byte[] buffer = new byte[1024];
for (int i; (i = in.read(buffer)) != -1; ) {
    bos.write(buffer, 0, i);
}
in.close();
byte[] pdfdata = bos.toByteArray();
bos.close();
vars.put("pdfdata",new String(pdfdata));
Then I use the ${pdfdata} variable in a Beanshell PostProcessor to write the content into another PDF.
Beanshell PostProcessor code:
FileWriter fstream = new FileWriter("newresult1.pdf",true);
BufferedWriter out = new BufferedWriter(fstream);
out.write(vars.get("pdfdata"));
out.close();
fstream.close();
The file is created, but when I open it, it is blank; no content is displayed in it.
So can anyone please tell me how to fix this issue?

Since JMeter 3.1 you should be using JSR223 Test Elements and the Groovy language for scripting.
I fail to see any "conversion" in your code; it looks like you're just trying to make a copy of the file. With Groovy it takes only one line:
new File('newresult1.pdf').bytes = new File('C:\\Users\\hp\\Downloads\\Instructions.pdf').bytes
More information on Groovy scripting in JMeter: Apache Groovy: What Is Groovy Used For?
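As for why the original script produces a blank file: converting the PDF bytes to a String and writing them back with a FileWriter mangles the binary content. If the bytes really do need to travel between test elements, a minimal JSR223/Groovy sketch (reusing the paths from the question) would store them as an object instead:

// JSR223 Sampler: keep the raw bytes intact by storing them as an object, not a String
vars.putObject('pdfdata', new File('C:\\Users\\hp\\Downloads\\Instructions.pdf').bytes)

// JSR223 PostProcessor: fetch the bytes back and write them out unchanged
new File('newresult1.pdf').bytes = vars.getObject('pdfdata') as byte[]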

Related

jmeter - how to make a groovy script easier to maintain for extentreports

Below is a script that helps me build an extent report for JMeter. It is a JSR223 PostProcessor element. It's working nicely; the problem, however, is that I have it duplicated after every HTTP Request in the script. I have several scripts with hundreds of HTTP requests, each of which would need essentially a copy of the same PostProcessor groovy script. This = hard to maintain!
I have tried splitting the common parts into an external groovy script that I called from the JSR223 PostProcessor. I also tried chunking up bits of the script and putting the values into a CSV so that I could just update the CSV values if anything changed.
I'm sure there's a cleaner/better way to do this, but I'm still learning, so I'm not sure of the best way to make this easier to maintain. Here's the JSR223 PostProcessor. The only bit that changes with each HTTP request is the "//test result" section.
import com.relevantcodes.extentreports.ExtentReports;
import com.relevantcodes.extentreports.ExtentTest;
import com.relevantcodes.extentreports.LogStatus;
import java.io.File;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
//configure object for response data
def response = prev.getResponseDataAsString();
//configure extentreports objects
ExtentReports report;
ExtentTest testLogger;
//set location for file and report config
String resultsPath = "C:/portalQA/Automation/Results/";
String configPath = "C:/portalQA/Automation/JMeter/TestReportConfig/";
String reportPath = resultsPath + "Login_Results_${reportDate}_${currentTime}_${testenv}.html";
File file = new File(reportPath);
if (!file.exists()) {
    //if file does not exist, create it
    report = new ExtentReports(reportPath, true);
    report.loadConfig(new File(configPath + "extent-config.xml"));
} else {
    //else append to existing report
    report = new ExtentReports(reportPath, false);
    report.loadConfig(new File(configPath + "extent-config.xml"));
}
//test result
testLogger = report.startTest("Authenticate");
testLogger.assignCategory("Initialize Session");
if (response.contains("com.blah.portal.model.User")) {
    testLogger.log(LogStatus.PASS, "Logged in with: ${username}");
    testLogger.log(LogStatus.INFO, response);
} else {
    testLogger.log(LogStatus.FAIL, "Could not authenticate session");
    testLogger.log(LogStatus.INFO, response);
}
log.info("Authenticate");
print("Authenticate print");
report.endTest(testLogger);
report.flush();
I see two options:
I suggest using a JSR223 Listener instead. First of all, that way you will have only one listener in your test plan, which resolves your original problem. It is also a better option for writing to a file in general, since a listener has only one instance shared by all running threads, so you won't create a race condition when writing to the file.
If you'd rather keep a post-processor, you can put it at a higher level (not under any particular sampler), which will cause it to run after every request within the same scope or below.
For example, configuration like
Thread Group
Post-processor
Sampler 1
...
Sampler N
will cause the Post-processor to run after each of Sampler 1...Sampler N.
In both cases you may need to check which sampler you are processing and skip those you don't want to add to your report (the easiest way is to come up with a naming convention for the excluded samplers), as in the sketch below.
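A minimal JSR223/Groovy guard at the top of the shared post-processor could look like this (the 'NOREPORT' marker is a hypothetical convention; pick whatever suits your plan):

// Skip any sampler whose label carries the exclusion marker
if (prev.getSampleLabel().contains('NOREPORT')) {
    return
}
// ...shared report-building code continues here...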
I also faced the same challenge. In my case I needed to check whether the JSON response from a REST service was correct. I solved it in the following way.
I've created a JSR223 PreProcessor under the script root. It contains my custom class to handle JSON parsing and asserts.
import groovy.json.JsonSlurper
import org.apache.jmeter.assertions.AssertionResult

class CustomAssert {
    def parseResponse(json) {
        def jsonSlurper = new JsonSlurper()
        return jsonSlurper.parseText(json)
    }
    def assertResult(assertionResult, expectedResult, actualResult) {
        if (!expectedResult.equals(actualResult)) {
            assertionResult.setFailure(true);
            assertionResult.setFailureMessage("Expected ${expectedResult} but was ${actualResult}");
        }
    }
}

vars.putObject('customAssert', new CustomAssert())
Note the last line:
vars.putObject('customAssert', new CustomAssert())
I put an instance of my CustomAssert to vars.
Then under my HTTP Requests I've added a JSR223 Assertion:
def a = vars.getObject('customAssert')
def response = a.parseResponse(prev.getResponseDataAsString())
a.assertResult(AssertionResult, 'DRY', response.sensorResultHolderUIs[0].result.toString())
a.assertResult(AssertionResult, 'DRY', response.sensorResultHolderUIs[1].result.toString())
a.assertResult(AssertionResult, 'DRY', response.sensorResultHolderUIs[2].result.toString())
It basically retrieves the instance of CustomAssert from vars and calls its methods. I can put in as many JSR223 Assertions as I want. The only code that is duplicated is the two lines at the top:
def a = vars.getObject('customAssert')
def response = a.parseResponse(prev.getResponseDataAsString())
To sum up:
1. Take the common part of your code (the part that doesn't have to be copied).
2. Wrap it in a class.
3. Put the class in a JSR223 PreProcessor under the root and export an instance via vars.
4. Take the rest of your code and adjust it to use the class defined in 2.
5. Put that code in as many JSR223 Assertions as you want, remembering to retrieve the instance created in 3 from vars.
Thank you user1053510. Your advice led me to build my own JSR223 Listener that renders the report. Below is the code in my JSR223 Listener:
import com.aventstack.extentreports.*;
import com.aventstack.extentreports.reporter.*;
import com.aventstack.extentreports.markuputils.*;
ExtentHtmlReporter htmlReporter;
ExtentReports extent;
ExtentTest test;
// create the HtmlReporter
htmlReporter = new ExtentHtmlReporter("C:/AUTO_Results/Results_${testApp}_${reportDate}_${currentTime}_${testenv}.html");
//configure report
htmlReporter.config().setCreateOfflineReport(true);
htmlReporter.config().setChartVisibilityOnOpen(true);
htmlReporter.config().setDocumentTitle("${testApp} Results");
htmlReporter.config().setEncoding("utf-8");
htmlReporter.config().setReportName("${testApp} Results ${reportDate}_${currentTime}_${testenv}");
htmlReporter.setAppendExisting(true);
// create ExtentReports
extent = new ExtentReports();
// attach reporter to ExtentReports
extent.attachReporter(htmlReporter);
extent.setReportUsesManualConfiguration(true);
// Show Env section and set data on dashboard
extent.setSystemInfo("Tool","JMeter");
extent.setSystemInfo("Test Env","${testenv}");
extent.setSystemInfo("Test Date","${reportDate}");
extent.setSystemInfo("Test Time","${currentTime}");
//stringify test info
String threadName = sampler.getThreadName();
String samplerName = sampler.getName();
String requestData = props.get("propRequestData");
String respCode = props.get("propRespCode");
String respMessage = props.get("propRespMessage");
String responseData = props.get("propResponse");
// create test
test = extent.createTest(threadName+" - "+samplerName);
//test.assignCategory("API Testing");
// analyze sampler result
if (vars.get("JMeterThread.last_sample_ok") == "false") {
log.error("FAILED: "+samplerName);
print("FAILED: "+samplerName);
test.fail(MarkupHelper.createLabel("FAILED: "+sampler.getName(),ExtentColor.RED));
} else if (vars.get("JMeterThread.last_sample_ok") == "true") {
if(responseData.contains("#error")) {
log.info("FAILED: "+sampler.getName());
print("FAILED: "+sampler.getName());
test.fail(MarkupHelper.createLabel("FAILED: "+sampler.getName(),ExtentColor.RED));
} else if (responseData.contains("{")) {
log.info("Passed: "+sampler.getName());
print("Passed: "+sampler.getName());
test.pass(MarkupHelper.createLabel("Passed: "+sampler.getName(),ExtentColor.GREEN));
}
} else {
log.error("Something is really wonky");
print("Something is really wonky");
test.fatal("Something is really wonky");
}
//info messages
test.info("RequestData: "+requestData);
test.info("Response Code and Message: "+respCode+" "+respMessage);
test.info("ResponseData: "+responseData);
//playing around
//markupify json into code blocks
//Markup m = MarkupHelper.createCodeBlock(requestData);
//test.info(MarkupHelper.createModal("Modal text"));
//Markup mCard = MarkupHelper.createCard(requestData, ExtentColor.CYAN);
// test.info("Request "+m);
// test.info(mCard);
// test.info("Response Data: "+MarkupHelper.createCodeBlock(props.get("propResponse")));
// test.info("ASSERTION MESSAGE: "+props.get("propAssertion"));
// end the reporting and save the file
extent.flush();
Then in each thread group I have a BeanShell Assertion with these lines:
//request data
String requestData = new String(prev.SamplerData);
//String requestData = new String(requestData);
props.put("propRequestData", requestData);
//response data
String respData = new String(prev.ResponseData);
//String respData = new String(prev.getResponseDataAsString());
props.put("propResponse", respData);
//response code
String respCode = new String(prev.ResponseCode);
props.put("propRespCode",respCode);
//response message
String respMessage = new String(prev.ResponseMessage);
props.put("propRespMessage",respMessage);

Getting exception on bean shell assertion for org/json/simple/JSONArray

Error: jmeter.util.BeanShellInterpreter: Error invoking bsh method: eval org/json/simple/JSONArray
I am importing Java methods from a project jar file which is placed in the lib folder.
Source code written to pass JSON values into ArrayLists:
import jsonresponse.common.JsonResponseProcessor;
import assertions.AssertResponse;
import databaseresponse.common.DbResponseProcessors;
import org.json.simple.JSONArray;
String resJson = prev.getResponseDataAsString();
String res = resJson.get("queueId");
log.info("----->>>>"+resJson);
ArrayList list1 = new ArrayList();
list1.add("queueId");
list1.add("name");
list1.add("faxNumber");
list1.add("description");
list1.add("type");
ArrayList list2 = new ArrayList();
list2.add("userId");
ArrayList list3 = new ArrayList();
list3.add("agencyId");
ArrayList list4 = new ArrayList();
list4.add("usertypeId");
JsonResponseProcessor obj = new JsonResponseProcessor();
System.out.println("%%%%%%%%%%%%%%fgfggfffgf%%%%%%%%%%%%%%%");
ArrayList Jsoin = obj.getvalueofsubmapoflist(resJson,".result[0]",list1);
System.out.println("%%%%%%%%%%%%%%%%%%%%%%%%%%%%%" +Jsoin);
log.info(">>>"+Jsoin);
Make sure you have all referenced .jar files in JMeter classpath (i.e. copy them to JMeter's "lib" folder). Don't forget to restart JMeter to pick the jars up
If you still experience issues, try putting your code inside a try block like:
try {
    //your code here
}
catch (Throwable ex) {
    log.error("Problem in Beanshell", ex);
    throw ex;
}
This way you'll have an informative stacktrace printed to the jmeter.log file.
One more way to get more information regarding your Beanshell script is putting the debug(); directive at the beginning of your code. It will trigger debugging output to stdout, for example:
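debug(); // first line of the Beanshell script: enables Beanshell's diagnostic output to stdout
// ...the rest of your script follows...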
See How to Use BeanShell: JMeter's Favorite Built-in Component article for more information on using Java and JMeter APIs from Beanshell test elements in JMeter tests.

Web API - Setting Response.Content with byte[] / MemoryStream Contents not working properly

My requirement is to use Web API to send a zip file (consisting of a bunch of files in turn) across the network, without writing it anywhere locally (not on the server's or the client's disk). For zipping, I am using DotNetZip - Ionic.Zip.dll.
Code at Server:
public async Task<IHttpActionResult> GenerateZip(Dictionary<string, StringBuilder> fileList)
{
    // fileList is actually a dictionary of "FileName", "FileContent"
    byte[] data;
    using (ZipFile zip = new ZipFile())
    {
        foreach (var item in fileList.ToArray())
        {
            zip.AddEntry(item.Key, item.Value.ToString());
        }
        using (MemoryStream ms = new MemoryStream())
        {
            zip.Save(ms);
            data = ms.ToArray();
        }
    }
    var result = new HttpResponseMessage(HttpStatusCode.OK);
    MemoryStream streams = new MemoryStream(data);
    //, 0, data.Length-1, true, false);
    streams.Position = 0;
    //Encoding UTFEncode = new UTF8Encoding();
    //string res = UTFEncode.GetString(data);
    //result.Content = new StringContent(res, Encoding.UTF8, "application/zip");
    result.Content = new StreamContent(streams);
    result.Content.Headers.ContentType = new MediaTypeHeaderValue("application/zip");
    //result.Content.Headers.ContentLength = data.Length;
    result.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment");
    result.Content.Headers.ContentDisposition.FileName = "test.zip";
    return this.Ok(result);
}
The issue I am facing is that the zip file downloaded at the client end is missing the stream contents (the byte[] data in this example). I am getting back a test.zip file, but when I rename it locally from test.zip to test.bin, I see that the file's contents are as shown below; it does not contain the Response.Content values. (P.S. I have also tried the MIME type "application/octet-stream" as well. No luck!)
Test.zip aka test.bin’s contents:
{"version":{"major":1,"minor":1,"build":-1,"revision":-1,"majorRevision":-1,"minorRevision":-1},
"content":{"headers":[{"key":"Content-Type","value":["application/zip"]},
{"key":"Content-Disposition","value":["attachment; filename=test.zip"]}]},
"statusCode":200,"reasonPhrase":"OK","headers":[],"isSuccessStatusCode":true}
Can someone please help me with how to set result.Content from a MemoryStream object? (I have seen examples of "FileStream" in other places on Google used to set "result.Content", but I want to use a MemoryStream object only!) I am highlighting this because I think the problem lies in setting the MemoryStream object on result.Content (in order to properly save the stream's contents into the result.Content object).
P.S. I have also gone through Uploading/Downloading Byte Arrays with AngularJS and ASP.NET Web API (and a bunch of other links) but it did not help me much... :(
Any help is greatly appreciated. Thanks a lot in advance :)
I got my issue solved!!
All I did was change the response type to HttpResponseMessage and use "return result" in the last line rather than Ok(result) (i.e. return the HttpResponseMessage itself rather than an OkNegotiatedContentResult wrapping it). Wrapping the message in Ok(...) runs it through content negotiation, which is why the downloaded file contained a JSON serialization of the HttpResponseMessage instead of the zip bytes.

How to open/stream .zip files through Spark?

I have zip files that I would like to open 'through' Spark. I can open .gzip files no problem because of Hadoop's native codec support, but am unable to do so with .zip files.
Is there an easy way to read a zip file in your Spark code? I've also searched for zip codec implementations to add to the CompressionCodecFactory, but have been unsuccessful so far.
There was no Python solution here, and I recently had to read zips in PySpark; while searching for how to do that I came across this question. So, hopefully this will help others.
import zipfile
import io

def zip_extract(x):
    in_memory_data = io.BytesIO(x[1])
    file_obj = zipfile.ZipFile(in_memory_data, "r")
    files = [i for i in file_obj.namelist()]
    return dict(zip(files, [file_obj.open(file).read() for file in files]))

zips = sc.binaryFiles("hdfs:/Testing/*.zip")
files_data = zips.map(zip_extract).collect()
In the above code I returned a dictionary with the filename in the zip as the key and the data of each file as the value. You can change it however you want to suit your purposes.
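One caveat worth noting: collect() pulls the extracted contents of every archive back to the driver, so for large volumes of zips you would keep the data in the RDD and process it there instead.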
@user3591785 pointed me in the correct direction, so I marked his answer as correct.
For a bit more detail, I was able to search for "ZipFileInputFormat Hadoop" and came across this link: http://cotdp.com/2012/07/hadoop-processing-zip-files-in-mapreduce/
Taking the ZipFileInputFormat and its helper ZipfileRecordReader class, I was able to get Spark to perfectly open and read the zip file.
rdd1 = sc.newAPIHadoopFile("/Users/myname/data/compressed/target_file.ZIP", ZipFileInputFormat.class, Text.class, Text.class, new Job().getConfiguration());
The result was a map with one element: the file name as key and the content as the value, so I needed to transform it into a JavaPairRDD. I'm sure you could replace Text with BytesWritable if you want, and replace the ArrayList with something else, but my goal was to first get something running.
JavaPairRDD<String, String> rdd2 = rdd1.flatMapToPair(new PairFlatMapFunction<Tuple2<Text, Text>, String, String>() {
    @Override
    public Iterable<Tuple2<String, String>> call(Tuple2<Text, Text> textTextTuple2) throws Exception {
        List<Tuple2<String, String>> newList = new ArrayList<Tuple2<String, String>>();
        InputStream is = new ByteArrayInputStream(textTextTuple2._2.getBytes());
        BufferedReader br = new BufferedReader(new InputStreamReader(is, "UTF-8"));
        String line;
        while ((line = br.readLine()) != null) {
            Tuple2 newTuple = new Tuple2(line.split("\\t")[0], line);
            newList.add(newTuple);
        }
        return newList;
    }
});
Please try the code below, using the API:
sparkContext.newAPIHadoopRDD(
    hadoopConf,
    InputFormat.class,
    ImmutableBytesWritable.class, Result.class)
I've had a similar issue and I've solved it with the following code:
sparkContext.binaryFiles("/pathToZipFiles/*")
  .flatMap { case (zipFilePath, zipContent) =>
    val zipInputStream = new ZipInputStream(zipContent.open())
    Stream.continually(zipInputStream.getNextEntry)
      .takeWhile(_ != null)
      .flatMap { zipEntry => ??? }
  }
This answer only collects the previous knowledge, plus my own experience.
ZipFileInputFormat
I tried following @Tinku's and @JeffLL's answers, using the imported ZipFileInputFormat together with the sc.newAPIHadoopFile API. But this did not work for me, and I do not know how I would put the com-cotdp-hadoop lib on my production cluster; I am not responsible for its setup.
ZipInputStream
@Tiago Palma gave good advice, but he did not finish his answer and I struggled for quite some time to actually get the decompressed output.
Before I was able to do so, I had to prepare all the theoretical aspects, which you can find in my answer: https://stackoverflow.com/a/45958182/1549135
But the missing part of the mentioned answer is reading the ZipEntry:
import java.util.zip.ZipInputStream
import java.io.BufferedReader
import java.io.InputStreamReader

sc.binaryFiles(path, minPartitions)
  .flatMap { case (name: String, content: PortableDataStream) =>
    val zis = new ZipInputStream(content.open)
    Stream.continually(zis.getNextEntry)
      .takeWhile(_ != null)
      .flatMap { _ =>
        val br = new BufferedReader(new InputStreamReader(zis))
        Stream.continually(br.readLine()).takeWhile(_ != null)
      }
  }
Using the API sparkContext.newAPIHadoopRDD(hadoopConf, InputFormat.class, ImmutableBytesWritable.class, Result.class), the file name should be passed in via the conf:
conf = new Job().getConfiguration()
conf.set(PROPERTY_NAME from your input formatter, "Zip file address")
sparkContext.newAPIHadoopRDD(conf, ZipFileInputFormat.class, Text.class, Text.class)
Please find the PROPERTY_NAME for setting the path in your input formatter.
Try:
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
spark.read.text("yourGzFile.gz")

JMeter: Send gzipped json?

I have a web service I need to test with JMeter, but the spec requires compressed JSON. I need to create the JSON from data in a CSV file. I have been able to work that out using a Beanshell PreProcessor and a CSV Data Set Config. But now I need a way to gzip the data and send it to the server. Is there some sampler out there that will do this?
Hackish Solution
The only working solution I've found is to compress the data from the beanshell script and then have JMeter send the file, but this seems a bit nasty to me.
import com.eclipsesource.json.*;
import java.io.FileOutputStream;
import java.io.BufferedWriter;
import java.io.OutputStreamWriter;
import java.util.zip.GZIPOutputStream;
jsonObject = new JsonObject();
// populate json data here
GZIPOutputStream zip = new GZIPOutputStream(new FileOutputStream("c:\\json.gz"));
writer = new BufferedWriter(new OutputStreamWriter(zip, "UTF-8"));
jsonObject.writeTo(writer);
writer.close();
If there are no samplers that do the compression, is there a way to avoid writing the data to the file system? Should I make a post processor to delete the temp file? Thanks for your help!
None of the samplers I know of will gzip for you automatically.
You can use a ByteArrayOutputStream instead of the FileOutputStream, then turn its contents into a JMeter variable or property, so you can use it in the sampler via the variable name. Note that you need to keep a reference to the ByteArrayOutputStream itself; calling toString() on the GZIPOutputStream would not return the compressed content.
import com.eclipsesource.json.*;
import java.io.ByteArrayOutputStream;
import java.io.BufferedWriter;
import java.io.OutputStreamWriter;
import java.util.zip.GZIPOutputStream;
jsonObject = new JsonObject();
// populate json data here
ByteArrayOutputStream baos = new ByteArrayOutputStream();
GZIPOutputStream zip = new GZIPOutputStream(baos);
writer = new BufferedWriter(new OutputStreamWriter(zip, "UTF-8"));
jsonObject.writeTo(writer);
writer.close(); // closing the writer also finishes the gzip stream
// a single-byte charset keeps the compressed bytes intact in the String;
// set the HTTP Request's content encoding to the same charset
vars.put("ZIPFILE", baos.toString("ISO-8859-1"));
Then, in the Body Data section of your HTTP Request, refer to the variable ${ZIPFILE}.
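Depending on the service, you will likely also need an HTTP Header Manager that adds a Content-Encoding: gzip header so the server knows the body is compressed.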
