I am trying to automate a workflow that performs the following steps:
1. Enters the Document Id
2. Selects a file from the Cypress fixtures directory
3. Clicks on the Verify Document button
The script performs all 3 steps, but I receive the following bad request error after step 3:
"{"message":"Error - Input stream cannot be opened: PDF_E_HEADER: The file header was not found."}"
When I perform the same steps manually, the workflow completes as expected, so the problem looks like it is in the import function listed below:
clickImport(fileName) {
  cy.contains("button", "Select File").click().wait(1000)
  cy.get('input[type="file"]').attachFile(fileName, { mimeType: 'application/pdf' })
  cy.wait(1000)
}
It appears that the PDF content is being included in the request payload.
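One thing worth checking is how the fixture is read before it is attached: if the PDF bytes are read as text, the file header can be corrupted even though content shows up in the payload. Below is a minimal sketch of attaching the fixture as binary content; it assumes the cypress-file-upload plugin, and the exact Blob helper and attachFile option names may vary between Cypress and plugin versions.

clickImport(fileName) {
  cy.contains("button", "Select File").click();

  // Read the fixture as a raw binary string so the PDF bytes are not
  // re-encoded, convert it to a Blob, and attach that to the file input.
  cy.fixture(fileName, "binary")
    .then(Cypress.Blob.binaryStringToBlob)
    .then((blob) => {
      cy.get('input[type="file"]').attachFile({
        fileContent: blob,
        fileName: fileName,
        mimeType: "application/pdf",
      });
    });
}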
I'm trying to solve a test data issue in JMeter. Could anyone have a look at the problem statement below and advise, please?
Requirement: I need to send all entries in a CSV file in the HTTP request body of a single request to the endpoint.
Example CSV File:
"adsfas123wsf00000wqefqwe52145t10000",
"fdfrgvergq120947r0000dwsfqwaef237sadf",
"wfrqwef7865034r78tkahsefjh6985r7asfdaf",
"qefqwe52145t10000adsfas123wsf00000w",
"wsfqwaef237sadffdfrgvergq120947r0000d"
HTTP Request Body:
["${data}"}]
When the data is substituted, I should be able to get below output.
[
"adsfas123wsf00000wqefqwe52145t10000",
"fdfrgvergq120947r0000dwsfqwaef237sadf",
"wfrqwef7865034r78tkahsefjh6985r7asfdaf",
"qefqwe52145t10000adsfas123wsf00000w",
"wsfqwaef237sadffdfrgvergq120947r0000d"
]
Problem statement: When I use the CSV Data Set Config element, I'm unable to concatenate all entries into one single request body. My understanding is that CSV Data Set Config is not the right option here.
I did some searching on Stack Overflow and followed a method to achieve the above using a JSR223 PreProcessor; the link is How to send multiple json body using jmeter?.
Following that link, I tried the custom code provided below:
def builder = new StringBuilder()
new File('/path/to/plans.csv').readLines().each { line ->
    builder.append(new File(line).text).append(System.getProperty('line.separator'))
}
sampler.getArguments().removeAllArguments()
sampler.addNonEncodedArgument('', builder.toString(), '')
sampler.setPostBodyRaw(true)
Upon running, I get the error message below:
Caused by: java.io.FileNotFoundException,
"adsfas123wsf00000wqefqwe52145t10000",
"fdfrgvergq120947r0000dwsfqwaef237sadf",
"wfrqwef7865034r78tkahsefjh6985r7asfdaf",
"qefqwe52145t10000adsfas123wsf00000w",
"wsfqwaef237sadffdfrgvergq120947r0000d" (The filename, directory name, or volume label syntax is incorrect)
If the file is not found, then how come the entries are read and displayed in the log viewer?
Also, how do I link the output of the custom code to the request body? Or is that taken care of by the custom code itself?
You have an error in your code; change this line:
builder.append(new File(line).text).append(System.getProperty('line.separator'))
to this one:
builder.append(line).append(System.getProperty('line.separator'))
If you want to send the full contents of the file, you don't even need to go for scripting; you can use the __FileToString() function right in the "Body Data" tab of the HTTP Request sampler:
${__FileToString(/path/to/plans.csv,,)}
And last but not least, if you need to generate JSON from the plain text, it might be a better idea to go for the JsonBuilder class; see Apache Groovy - Why and How You Should Use It and Apache Groovy - Parsing and producing JSON.
Two steps:
1. Add a User Parameters pre-processor before the HTTP request:
   Name: DATA
   User_1: ${__FileToString(/path/to/plans.csv,,)}
2. Add the following to the request body:
   ${DATA}
In iOS I am using AFNetworking to call web services. I am getting the error "unacceptable content type: application/excel". After a lot of struggle I came to know that it is a problem on the server side. I have a question regarding this:
Before the response of the service call is returned to the app, a notification is sent, and at the same time the notification information (text, time, etc.) is logged in a notification.xls file on the server, where the header is set for the file like below:
function calledWhenServiceCalled() {
    $someJsonArray; // generate some JSON array
    saveNotification();
    return $someJsonArray;
}

function saveNotification() {
    // some functionality
    header('Content-Type: application/excel');
    // save the file to .csv
}
If I comment out the header('Content-Type: application/excel'); line above, the service call does not return any error and returns a proper JSON response. But if the line is retained, I get the unacceptable content type (application/excel) error from the server.
Please let me know why the content type set for the file is being sent as an unacceptable content type for the service response. There is no connection between the file created and the service response, right? I am not getting what the problem is. Please help, and let me know if I have not made myself clear.
I am using Fine Uploader's feature "initial file list". After some initial troubles I am now able to display my files and want to handle the deletion of uploaded items.
According to http://docs.fineuploader.com/branch/master/features/session.html my server-side should provide a JSON array with Objects containing at least name and uuid (I am using the Simple mode of Fine Uploader).
As you can see from the log below, the UUID property is handled without problems; I am receiving it on the server side and deleting the file successfully.
The problem is that after a successful deletion I want to do something else on the client side, and that is why I listen to the deleteComplete event like this:
.on('deleteComplete', function (event, id, xhr, isError) {
    if (!isError) {
        console.log("reducing the uploaded items");
        // ... do something here
    }
});
Now the id parameter is 0, which is a blocker for me because I have to use it for further processing. I guess the 0 comes from the delete request submitted before.
So what I am looking for is a way to somehow tell Fine Uploader what my id is. I guess if I am able to do it correctly when filling the initial file list then it will be correctly propagated to the deleteComplete method.
Output from Fine Uploader when I load the page with an initial file list and delete one of the files afterwards:
...
"[Fine Uploader 5.1.3] Attempting to update thumbnail based on server response."
"[Fine Uploader 5.1.3] Detected valid file button click event on file '7a5a2ebd-f7d3-40a1-b9da-cde5fc9307c6', ID: 0."
"[Fine Uploader 5.1.3] Submitting delete file request for 0"
"[Fine Uploader 5.1.3] Sending DELETE request for 0"
"[Fine Uploader 5.1.3] Delete request for '7a5a2ebd-f7d3-40a1-b9da-cde5fc9307c6' has succeeded."
....
As seen in the comments, it turned out that the id field is something that Fine Uploader manages internally, and manipulating it is neither possible nor intended. As all of my items have their own uuid field, I was able to use that one for further processing and for distinguishing between the deleted files.
In order to retrieve the uuid for a given id, one can use the getUuid method, which goes like this:
var uuid = $('#fine-uploader').fineUploader('getUuid', id);
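Put together with the deleteComplete handler from the question, the lookup can be done inside the callback itself. A rough sketch, assuming the jQuery plugin wrapper used above and that the uuid is still resolvable for the deleted id at that point:

$('#fine-uploader').on('deleteComplete', function (event, id, xhr, isError) {
    if (!isError) {
        // Map Fine Uploader's internal numeric id to the uuid the server knows about.
        var uuid = $('#fine-uploader').fineUploader('getUuid', id);
        console.log("deleted item with uuid " + uuid);
        // ... continue client-side processing keyed on uuid instead of id
    }
});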
I am using a custom validation directive to validate a field in a form, and the validation process includes an AJAX call to verify user information with the server API. We want to validate the field once the user stops typing for a certain period of time.
I have two questions:
1. Why is the function below (in the link function of the custom directive) not working? The log message is never shown no matter how many times I type in the bound input field (I am pretty sure I enabled log message display, since other logs were shown correctly).
scope.$watch(attrs.ngModel, function() {
    $log.debug("Changed to " + scope.$eval(attrs.ngModel));
});
2. What is the best way to detect that the user has stopped typing, so that the program can execute the validation process? And is it possible to cancel the validation process before it finishes if the user starts typing again?
attrs.ngModel will equal the string value in the HTML context. What you want to do is bind the ngModel value to the directive's scope:
scope: {
    model: '=ngModel'
}
Then watch the directive's scope:
scope.$watch("model", function() {
console.log("Changed");
});
example: http://jsfiddle.net/BtrZH/5/
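For completeness, here is a minimal sketch of how the watched model could be combined with a debounce so the AJAX validation only fires once the user has stopped typing for a while. The module name 'app', the directive name 'userCheck', the 500 ms delay, and the /api/validate-user endpoint are illustrative assumptions, not part of the question or answer:

angular.module('app').directive('userCheck', function ($timeout, $http, $log) {
    return {
        restrict: 'A',
        scope: {
            model: '=ngModel'   // bind the ngModel value, as described above
        },
        link: function (scope, element, attrs) {
            var pending;        // promise for the currently scheduled validation call

            scope.$watch('model', function (newValue) {
                $log.debug('Changed to ' + newValue);

                // Restart the timer on every change; the validation call only
                // fires once the value has been stable for 500 ms.
                if (pending) {
                    $timeout.cancel(pending);
                }
                pending = $timeout(function () {
                    // Hypothetical endpoint; replace with the real validation call.
                    $http.get('/api/validate-user', { params: { value: newValue } })
                        .then(function (response) {
                            $log.debug('Validation result: ' + response.data.valid);
                        });
                }, 500);
            });
        }
    };
});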
Question: how can I use an output file stored on the OpenCPU server as input to another function?
Background:
I am attempting to use knitr and markdown within OpenCPU to generate HTML that I can use to update a web page with statistical information on page load.
The basic workflow is as follows:
1. Generate an .Rmd file and store it locally.
2. Access a web page that uses AJAX to upload the .Rmd file to an OpenCPU instance on the server.
3. Use the knit function via OpenCPU to turn the .Rmd file into a *.md file stored on the server.
4. Use the markdownToHTML function on the file stored on the server (by passing in the appropriate hash generated via the call to knit) and receive an AJAX reply that contains the generated HTML.
5. Update the web page with the new HTML.
As it stands, I have this process working up to step 4. I can call knit, passing in an .Rmd file via a form POST request, and I receive the following reply from OpenCPU:
{
"object" : "xa9eaea44e1",
"graphs" : [
"xf31dcfe7f3"
],
"files" : {
"figure" : "xfc55396fd8",
"test.md" : "x7821c69f79"
}
}
where "test.md" is the output file generated via the knit function. Now, I attempt to use the hash (in this case "x7821c69f79" by POSTing to /R/pub/markdown/markdownToHTML/ascii with the following parameters:
file /R/tmp/x7821c69f79/bin
This returns an HTTP 400 error with the following message:
cannot open URL 'http://localhost/R/store/R/tmp/x7821c69f79/bin/rds'
However, when I make a GET request to /R/tmp/x7821c69f79/bin, the contents of test.md are returned. So I know the file is being stored correctly on the call to knit.
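In browser terms, the failing step-4 call looks roughly like the sketch below (assuming jQuery; the endpoint and the file parameter are exactly the ones described above, and the #stats element is an illustrative placeholder):

// POST the hash of the stored test.md to markdownToHTML and inject the reply.
$.ajax({
    url: '/R/pub/markdown/markdownToHTML/ascii',
    type: 'POST',
    data: { file: '/R/tmp/x7821c69f79/bin' },
    success: function (html) {
        $('#stats').html(html);   // step 5: update the page with the new HTML
    },
    error: function (xhr) {
        // currently returns the HTTP 400 "cannot open URL ..." message shown above
        console.log('markdownToHTML failed: ' + xhr.status + ' ' + xhr.responseText);
    }
});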
So, what's going on here? In other words, how can I use an output file stored on the OpenCPU server as input to another function?
Hmm, the /store error looks like a bug; I'll look into that.
Maybe in step 3 you can have the function return the contents of test.md, e.g. end with return(readLines("test.md"))? Or better yet, don't output to test.md but to a tempfile() and return the contents of that. This way the output is stored as an R object in the store, rather than a raw file, and you can just pass an argument, e.g. file=x7821c69f79, in step 4.
Did you have a look at the markdown example app? See source here and here.