Get the random file name - jmeter

I have a folder containing around 10 files (1.csv, 2.csv, ..., 10.csv) and I am uploading them in my HTTP request using a Beanshell PreProcessor with the following script:
File folder = new File("C:\\User\\SYSTEMTESTING\\SAMPLENEWFILES\\REUPLOADFILES");
File[] fileForUpload = folder.listFiles();
Random rnd = new Random();
vars.put("CURRENT_FILE", fileForUpload[rnd.nextInt(fileForUpload.length)].getAbsolutePath());
I want to get the name of the uploaded file using a JSR223 PostProcessor:
log.info("File Uploaded Is :"+${CURRENT_FILE});
I am getting:
javax.script.ScriptException: org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
Script8.groovy: 1: Missing ')' # line 1, column 79.
"File Uploaded Is ------->"+C:\Users\SY
In the request body:
PUT data:
--RhvwJL7ZdnIMBIaE0CoKVhsE68UNUiH
Content-Disposition: form-data; name="file"; filename="CSV_10_MB.csv"
Content-Type: application/vnd.ms-excel
Content-Transfer-Encoding: binary
<actual file content, not shown here>
--RhvwJL7ZdnIMBIaE0CoKVhsE68UNUiH--
I want the CSV_10_MB.csv name.

Use vars.get:
log.info("File Uploaded Is :" + vars.get("CURRENT_FILE"));

If you want to use Groovy's GString, you need to declare this CURRENT_FILE variable first; if this is what you're looking for, amend your code like:
def CURRENT_FILE = vars.get("CURRENT_FILE")
log.info("File Uploaded Is: ${CURRENT_FILE}")
or use the same vars shorthand for printing the file name to the JMeter log:
log.info("File Uploaded Is: " + vars.get("CURRENT_FILE"))
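If what's needed is just the bare file name (CSV_10_MB.csv) rather than the full path, File.getName() strips the directory part. A minimal sketch in plain Java (the path below is hypothetical; in JMeter the value would come from vars.get("CURRENT_FILE"), and since File parses separators per platform, a Windows path like the one in the question should be run on Windows):

```java
import java.io.File;

public class FileNameDemo {
    public static void main(String[] args) {
        // In JMeter this would be vars.get("CURRENT_FILE"); a made-up path is used here
        String currentFile = "/data/uploads/CSV_10_MB.csv";
        // getName() returns the last element of the path, i.e. the bare file name
        String name = new File(currentFile).getName();
        System.out.println("File Uploaded Is: " + name);
        // -> File Uploaded Is: CSV_10_MB.csv
    }
}
```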


Getting error in accessing Jmeter variables (variable_MarchNr) in yaml in beanshell script

I am trying to bulk edit one entity called issues. I call the GetIssues API and, via a JSON Extractor, get all issue IDs into the variable "issueIds".
Now I want to pass these IDs to another API that bulk edits issues. If I use this array directly in the next API, I get the error below:
{"details":"Unexpected token $ in JSON at position 19","metadata":{}}
So I used the below code in a Beanshell PostProcessor:
var mylist;
props.put("mylist", new ArrayList());
log.info("Detected " + vars.get("issueIds_matchNr") + " issueIds");
for (int i=1; i<= Integer.parseInt(vars.get("issueIds_matchNr")); i++) {
log.info("IDs # " + i + ": " + vars.get("issueIds_" + i));
props.get("mylist").add('"' + vars.get("issueIds_" + i) + '"' );
}
log.info(props.get("mylist").toString());
var issueIdList;
vars.put("issueIdList", props.get("mylist").toString());
log.info(vars.get("issueIdList"));
If I pass the issueIdList variable in my next API call, this works fine in JMeter.
Sample variable values in the Debug Sampler look like:
issueIdList=["555bcfc2", "33974d2c", "e58db1d6"]
issueIds_1=555bcfc2
issueIds_2=33974d2c
issueIds_3=e58db1d6
issueIds_matchNr=3
The problem I am facing: if I convert my JMX with jmx2yaml and try to run the resulting file with
bzt issues.yml
then while executing the above script, the issueIds_matchNr and issueIds_3 variables are not detected, and I get the error below:
2022-05-29 08:26:10,785 INFO o.a.j.e.J.JSR223PostProcessor: Detected null issueIds
2022-05-29 08:26:10,795 ERROR o.a.j.e.JSR223PostProcessor: Problem in JSR223 script, JSR223PostProcessor
javax.script.ScriptException: Sourced file: eval stream : Method Invocation Integer.parseInt : at Line: 4 : in file: eval stream : Integer .parseInt ( vars .get ( "issueIds_matchNr" ) )
Target exception: java.lang.NumberFormatException: null
in eval stream at line number 4
at bsh.engine.BshScriptEngine.evalSource(BshScriptEngine.java:87) ~[bsh-2.0b6.jar:2.0b6 2016-02-05 05:16:19]
My Yaml script is:
- extract-jsonpath:
    issueIds:
      default: NotFound
      jsonpath: $..id
  follow-redirects: true
  jsr223:
  - compile-cache: true
    execute: after
    language: beanshell
    parameters: issueIds
    script-file: script.bsh
  label: Get Issue Id's
  method: GET
  url: https://${BASE_URL1}/${PROJECT_ID}/issues?limit=5&sortBy=-displayId&filter%5Bvalid%5D=true
You're missing one important bit: setting the Match Nr in your Taurus YAML
The correct definition of the JSON Extractor would be something like:
extract-jsonpath:
  issueIds:
    jsonpath: $..id
    match-no: -1 # this is what you need to add
Also be informed that starting from JMeter 3.1 it's recommended to use Groovy as the scripting language, so consider migrating; it will be as simple as removing the first line from your script.bsh.
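For reference, the core of the PostProcessor logic is just collecting the numbered match variables into a quoted list. A stand-alone sketch in plain Java, with a HashMap standing in for JMeter's vars object (values taken from the Debug Sampler output above):

```java
import java.util.*;

public class IssueIdListDemo {
    public static void main(String[] args) {
        // Stand-in for JMeter's vars object; values are the ones from the question
        Map<String, String> vars = new HashMap<>();
        vars.put("issueIds_matchNr", "3");
        vars.put("issueIds_1", "555bcfc2");
        vars.put("issueIds_2", "33974d2c");
        vars.put("issueIds_3", "e58db1d6");

        // Same loop as the PostProcessor: collect each match, quoting it for JSON
        List<String> myList = new ArrayList<>();
        int matchNr = Integer.parseInt(vars.get("issueIds_matchNr"));
        for (int i = 1; i <= matchNr; i++) {
            myList.add('"' + vars.get("issueIds_" + i) + '"');
        }
        // toString() yields the bracketed form used in the request body
        System.out.println(myList.toString());
        // -> ["555bcfc2", "33974d2c", "e58db1d6"]
    }
}
```

Without match-no: -1 in the YAML, issueIds_matchNr is never set, so Integer.parseInt(null) throws exactly the NumberFormatException shown in the log.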

Google Cloud DLP - CSV inspection

I'm trying to inspect a CSV file and no findings are being returned (I'm using the EMAIL_ADDRESS info type, and the addresses I'm using come up with positive hits here: https://cloud.google.com/dlp/demo/#!/). I'm sending the CSV file into inspect_content with a byte_item as follows:
byte_item: {
  type: :CSV,
  data: File.open('/xxxxx/dlptest.csv', 'r').read
}
In looking at the supported file types, it looks like CSV/TSV files are inspected via Structured Parsing.
For CSV/TSV, does that mean one can't just send in the file, and needs to use the table attribute instead of byte_item as per https://cloud.google.com/dlp/docs/inspecting-structured-text?
What about XLSX files, for example? They're an unspecified file type, so I tried a configuration like the one below, but it still returned no findings:
byte_item: {
  type: :BYTES_TYPE_UNSPECIFIED,
  data: File.open('/xxxxx/dlptest.xlsx', 'rb').read
}
I'm able to do inspection and redaction with images and text fine, but am having a bit of a problem with other file types. Any ideas/suggestions welcome! Thanks!
Edit: The contents of the CSV in question:
$ cat ~/Downloads/dlptest.csv
dylans@gmail.com,anotehu,steve@example.com
blah blah,anoteuh,
aonteuh,
$ file ~/Downloads/dlptest.csv
~/Downloads/dlptest.csv: ASCII text, with CRLF line terminators
The full request:
parent = "projects/xxxxxxxx/global"
inspect_config = {
  info_types: [{name: "EMAIL_ADDRESS"}],
  min_likelihood: :POSSIBLE,
  limits: { max_findings_per_request: 0 },
  include_quote: true
}
request = {
  parent: parent,
  inspect_config: inspect_config,
  item: {
    byte_item: {
      type: :CSV,
      data: File.open('/xxxxx/dlptest.csv', 'r').read
    }
  }
}
dlp = Google::Cloud::Dlp.dlp_service
response = dlp.inspect_content(request)
The CSV file I was testing with was something I created in Google Sheets and exported as a CSV; however, the file showed locally as "text/plain; charset=us-ascii". I downloaded a CSV off the internet and it had a mime type of "text/csv; charset=utf-8". This is the one that worked. So it looks like my issue was specifically due to the file having an incorrect mime type.
xlsx is not yet supported. Coming soon. (Maybe that part of the question should be split out from the CSV debugging issue.)
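One way to sanity-check the detected mime type before sending (the DLP calls in the question are Ruby; this sketch only shows the mime probe in Java, with a hypothetical file name, and note that probeContentType's answer depends on the platform's detectors):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class MimeCheck {
    public static void main(String[] args) throws IOException {
        // Hypothetical file; substitute your own Sheets export
        Path csv = Path.of("dlptest_check.csv");
        Files.writeString(csv, "dylans@gmail.com,anotehu,steve@example.com\n");
        // probeContentType consults the platform's file-type detectors;
        // the result varies by OS and may even be null
        String mime = Files.probeContentType(csv);
        System.out.println("Detected mime: " + mime);
        Files.deleteIfExists(csv);
    }
}
```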

illegal characters at line 1 and col 1 in xml response in jmeter

I am trying to validate the XML response from a REST API using JMeter. I am using an XML Schema Assertion to validate the response against an XSD.
I am getting the following error on the XML Schema Assertion:
Assertion error: true
Assertion failure: false
Assertion failure message: fatal: line=1 col=1 Content is not allowed in prolog.
When I view the response I received, I see some illegal characters added at the beginning of the response XML.
I have tried to modify the jmeter.properties file and changed the following values:
jmeter.save.saveservice.output_format=xml
jmeter.save.saveservice.data_type=true
jmeter.save.saveservice.label=true
jmeter.save.saveservice.response_code=true
jmeter.save.saveservice.successful=true
jmeter.save.saveservice.thread_name=true
Please help me in understanding how to remove the illegal characters in the response and allow the xsd validation to pass through.
These characters indicate a Byte Order Mark, so you can use BOMInputStream from a JSR223 PostProcessor to remove them from the response and replace the response data with the "sanitized" XML.
Add JSR223 PostProcessor as a child of the HTTP Request sampler where you want to remove this BOM
Put the following code into "Script" area:
def is = new ByteArrayInputStream(prev.getResponseData())
def bOMInputStream = new org.apache.commons.io.input.BOMInputStream(is)
def bom = bOMInputStream.getBOM()
def charsetName = bom == null ? 'UTF-8' : bom.getCharsetName()
def reader = new InputStreamReader(new BufferedInputStream(bOMInputStream), charsetName)
prev.setResponseData(org.apache.commons.io.IOUtils.toByteArray(reader, 'UTF-8'))
That's it, your assertion shouldn't be failing anymore.
More information on Groovy scripting in JMeter: Apache Groovy - Why and How You Should Use It
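For reference, the underlying idea is simply dropping the three UTF-8 BOM bytes (EF BB BF) from the front of the response; a dependency-free sketch of the same effect in plain Java:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class BomStripDemo {
    public static void main(String[] args) {
        // A UTF-8 BOM (EF BB BF) followed by an XML prolog, as a server might return it
        byte[] bom = {(byte) 0xEF, (byte) 0xBB, (byte) 0xBF};
        byte[] xml = "<?xml version=\"1.0\"?><ok/>".getBytes(StandardCharsets.UTF_8);
        byte[] response = new byte[bom.length + xml.length];
        System.arraycopy(bom, 0, response, 0, bom.length);
        System.arraycopy(xml, 0, response, bom.length, xml.length);

        // Strip the BOM if present, same net effect as commons-io's BOMInputStream
        byte[] clean = response;
        if (response.length >= 3
                && response[0] == (byte) 0xEF
                && response[1] == (byte) 0xBB
                && response[2] == (byte) 0xBF) {
            clean = Arrays.copyOfRange(response, 3, response.length);
        }
        System.out.println(new String(clean, StandardCharsets.UTF_8));
        // -> <?xml version="1.0"?><ok/>
    }
}
```

The commons-io version in the answer is still preferable in JMeter, since it also handles UTF-16/UTF-32 BOMs and picks the right charset.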

Uploading a file to hdfs through hdfs API results in file getting appended and prepended with signature

My goal is to upload a file; this is what my code looks like:
headers = {
    'Some_Auth_Stuff': _get_ca_cert(ROLE),
    'Host': host,
}
files = {'upload_file': file}
params = (
    ('op', 'create'),
    ('permission', '755')
)
r = requests.put(
    'https://proxystuff.hostname.com/fs%s' % hue_path,
    headers=headers, files=files, params=params)
if r.status_code == 201:
    return True
return False
and I'm uploading this file:
i am a test file
I get a 201 response, which is great, but when I look at the file, it looks like this:
--04dc34a8a49d4b83878473d6d78e683d
Content-Disposition: form-data; name="upload_file"; filename="testfile"
i am a test file
--04dc34a8a49d4b83878473d6d78e683d--
Am I missing something when it comes down to uploading content? Any way to disable the file from getting stuff prepended and appended?
EDIT:
If I use this curl command it works fine
curl -c cookie -b cookie -T "test.txt" "https://proxystuff.hostname.com/fs/user/stupidfatcat/test.txt?op=create&permission=755" -H "Some_Auth_Stuff:blahblah" -H "Host:someotherhost_with_hadoop.com:4443"
After trying some stuff out, if I change it to
headers=headers, data=file.read(), params=params
and set the Content-Type to text/plain, it works fine; it seems the endpoint doesn't like the files param.
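That makes sense: requests' files= parameter wraps the content in a multipart/form-data envelope, while data= sends the bytes as-is, which is what a raw file-create endpoint expects. A rough Java illustration of the two request bodies (the boundary value is arbitrary; real clients generate a random one):

```java
public class MultipartVsRaw {
    public static void main(String[] args) {
        String fileContent = "i am a test file";
        String boundary = "04dc34a8a49d4b83878473d6d78e683d";

        // What files={'upload_file': file} produces: content wrapped in a multipart envelope
        String multipartBody = "--" + boundary + "\r\n"
                + "Content-Disposition: form-data; name=\"upload_file\"; filename=\"testfile\"\r\n"
                + "\r\n"
                + fileContent + "\r\n"
                + "--" + boundary + "--\r\n";

        // What data=file.read() produces: just the bytes themselves
        String rawBody = fileContent;

        System.out.println(multipartBody);
        System.out.println(rawBody);
    }
}
```

A server that stores the request body verbatim (as here) therefore persists the boundary lines too, which is exactly the prepended/appended "signature" in the question.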

Multiple file Upload - Multipart form data maximum limit

I am trying to upload multiple files with the following request, along with a stringified JSON object:
Content-Disposition: form-data; name="params"
{"data":{"userName":"jim","description":"test","email":"jim@ox.com"}}
-----------------------------5366762814869373672043632099
Content-Disposition: form-data; name="file0"
VBORw0KGgoAAAANSUhEUgAAAFwAAAA/CAYAAABtj6+sAAAYJ2lDQ1BJQ0MgUHJvZmlsZQAAWIWVeQdUFE2zds
..... (content of file0)
-----------------------------5366762814869373672043632099
Content-Disposition: form-data; name="file1"
cBORw0KGgoAAAANSUhEUgAAAEsAAABpCAYAAAByKt7XAAAYJ2lDQ1BJQ0MgUHJvZmlsZQAAWIWVeQdUFE2zds
..... (content of file1)
-----------------------------5366762814869373672043632099--
file0 is 16 KB and file1 is 12 KB,
and my Spring controller method looks like this:
@POST
@Path("/addFile")
@Consumes(MediaType.MULTIPART_FORM_DATA)
public void addFilesWithParams(MultipartFormDataInput filesData)
{
    //some logic
}
But it always ends up with one file saved and the second file ignored.
So I went ahead and debugged my code, and found that on the REST side I am only getting 2 parts of the form, i.e. params and file0; file1 is totally ignored.
When debugging, the part count is 2, even if 3 or 4 parts are being sent from the UI.
So my guess is there is some size cap that I have to override or specify.
Can anyone help me with this issue?
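The size-cap guess is plausible. If the application runs on Spring Boot, the usual multipart limits look like the fragment below (values here are purely illustrative; a standalone RESTEasy or Tomcat deployment uses different knobs, e.g. a MultipartConfigElement or connector settings):

```properties
# Spring Boot servlet multipart limits (illustrative values, not recommendations)
spring.servlet.multipart.max-file-size=10MB
spring.servlet.multipart.max-request-size=50MB
```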
