I am displaying a PDF inline in the browser, served from an API through an aspx page.
When saving the PDF in Chrome/Firefox, the browser takes the filename from the header ("Content-Disposition", "inline;filename=xyz.pdf").
But when saving the PDF in IE, it does not read the filename from that header; instead it uses the aspx page name.
Technical details
I have an xyz.aspx page.
The xyz.aspx page invokes an API to fetch a document.
The document downloaded from the API is then streamed to the browser inline to display the PDF.
I am setting the response headers as below and writing the file bytes.
HttpContext.Current.Response.ClearHeaders();
Response.AddHeader("Content-Disposition", "inline;filename=\"" + Name + "\"");
HttpContext.Current.Response.ContentType = "application/pdf";
Issue:
When saving the opened PDF in IE, it uses xyz.aspx instead of the name from the header.
Requirement:
When saving the PDF in IE, it needs to be saved with the name of the PDF.
I have googled a lot, but everyone just says it is IE behavior. I hope someone knows a solution.
Note: I have to display the PDF in the browser first and then save it, not download it using "attachment".
It is true that some versions of IE can't handle ("Content-Disposition", "inline;filename=...").
This is because filename=... was originally intended for the attachment disposition, and not all browser-based PDF viewers can handle it for inline.
The only solution I see is to allow access via a different URL.
Suppose you have a route to the PDF like /pdf/view. If you change it to /pdf/view/filename and configure your application to handle this route in the same way as /pdf/view, your problem is solved.
You can also rewrite the download URL on the web server.
Depending on your web server, you have various ways of doing this.
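As one illustration, on IIS such a rewrite could be sketched with the URL Rewrite module. The rule name and the /pdf/view paths below are assumptions following the example above, not part of the original setup:

```xml
<!-- web.config sketch: serve /pdf/view/<anything>.pdf with the same handler as /pdf/view -->
<system.webServer>
  <rewrite>
    <rules>
      <rule name="PdfFriendlyName" stopProcessing="true">
        <!-- match /pdf/view/<filename>.pdf -->
        <match url="^pdf/view/[^/]+\.pdf$" />
        <!-- rewrite internally to the real handler; the filename only shapes the URL
             that IE uses when saving -->
        <action type="Rewrite" url="pdf/view" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```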
I have also tried all kinds of headers and methods.
In the end, my solution was:
private FileResult ReturnStreamAsPdf(string fileName, Stream stream)
{
    ContentDisposition cd = new ContentDisposition
    {
        FileName = fileName,
        Inline = true // false = prompt the user to download; true = ask the browser to show the file inline
    };
    Response.Headers.Add("Content-Disposition", cd.ToString());
    Response.Headers.Add("X-Content-Type-Options", "nosniff");
    return new FileStreamResult(stream, MediaTypeNames.Application.Pdf);
}
and the Route Attribute on the method:
[Route("api/getpdfticket/{ticketnumber}")]
public async Task<FileResult> GetPdfTicket(string ticketnumber)
And the href:
href="@($"/api/getpdfticket/{product.TicketNumber}.pdf")"
It seems like Microsoft is still inventing their own standards:
http://test.greenbytes.de/tech/tc2231/#inlwithasciifilenamepdf
I want to save a complete mail as PDF.
I found the code below on Stack Overflow. It saves the mail item body but not the header (such as sender, recipient, subject).
I tried to manipulate the Word.Document to add the header info manually (in the code below I just use minimal changes for testing purposes), but it seems to be read-only. I also thought of "Print as PDF" using the Outlook print functionality, but found no way to trigger it from my Outlook VSTO solution.
using Word = Microsoft.Office.Interop.Word;

private void SaveMailAsPDF(Outlook.MailItem _mailitem)
{
    // source: https://stackoverflow.com/questions/26421252/save-outlook-mailitem-body-as-pdf
    Outlook.MailItem mi = _mailitem;
    mi.BodyFormat = Outlook.OlBodyFormat.olFormatHTML;
    string datetimeReceived = mi.ReceivedTime.ToString("yyyyMMdd-Hmmss");
    string fullPath = @"C:\Users\al\Documents\OutlookMailsTest\" + datetimeReceived + "Test.pdf";
    Word.Document doc = mi.GetInspector.WordEditor;
    //doc.Paragraphs.Add();
    //doc.Paragraphs.Add();
    //Word.Range rng = doc.Range(0, 0);
    //rng.Text = "New Text";
    doc.SaveAs2(fullPath, FileFormat: Word.WdSaveFormat.wdFormatPDF);
}
Try saving in the MHTML format (it preserves embedded pictures and includes the headers) using MailItem.SaveAs, then open the MHTML file using the Word object model and save it as a PDF file.
The Outlook object model doesn't provide any print-as-PDF methods. The MailItem.PrintOut method prints the Outlook item using all default settings. The PrintOut method is the only Outlook method that can be used for printing.
You can use the Word object model for saving the message body in the format you need. For example, you may save the document which represents the message body in the Open XML document format (*.docx) and then open it for editing with the Open XML SDK to add the message headers (if required). See Welcome to the Open XML SDK 2.5 for Office for more information.
You are also free to use third-party components to save the document with the required customizations in the PDF file format.
Using Ruby Mechanize, I have successfully submitted input values to a form and am able to get the resultant page based on the search criteria. The resultant page has PDF files as href links that I need to download.
The href attribute has a value like:
href='xxx.do?FILENAME=path/abc.pdf&SEARCHTEXT=aaa&ID=123_4'
where SEARCHTEXT is the text entered as input originally. When I manually click the link, the PDF opens in a new window whose URL is http://someip:8080/xxx/temp/123_4, which is the same ID seen in the href attribute. The actual filename, however, is different and is of the form xxx.123_2_.doc. My code below returns a 0-byte file:
scraper.pluggable_parser.pdf = Mechanize::FileSaver
File.open('n1pdf.pdf', 'wb'){|f| f << scraper.get(alink).body}
where alink=http://someip:8080/xxx/temp/123_4
If I use
File.open("new.pdf", "w") do |f|
uri = URI(alink)
f << Net::HTTP.get(uri)
end
I get an HTTP not found error.
I am not sure if I am doing this correctly. Is ID a session ID that is generated dynamically, since all PDF files on the resultant page have this ID with _1/2/3 as the filename (or URL)?
Please note that whenever I manually click and open a PDF file and then hardcode that URL in my code, the file downloads; but it does not when my code dynamically extracts the ID value and assigns it to alink. Not sure if this is related to cookies. Kindly help. Thank you.
Make sure it's the right absolute url:
uri = scraper.page.uri.merge(a[:href])
puts uri # just check to be sure
File.open('n1pdf.pdf', 'wb'){|f| f << scraper.get(uri).body}
I have an image in the data folder: data\img\myimage.jpg. I want to reference it in a content script. Moreover, I want to alter the DOM of the host page (the page where the content script is injected) by putting that image there.
I tried to follow what Jeff says here: http://blog.mozilla.org/addons/2012/01/11/sdk-1-4-known-issue-with-hard-coding-resource-uris/ (because I didn't find any other references to that issue), but nothing worked.
What URL do I need to use in the page in order to reference an image from the add-on's folders?
The data folder is available in main.js, and you can pass that URL as a content script option.
To preview the url in the console from main.js:
var data = require("sdk/self").data;
console.log(data.url('img/myimage.jpg'));
To pass the url to the content script in main.js:
var image_url = data.url('img/myimage.jpg');
...
contentScriptFile: data.url('myScript.js'),
contentScriptOptions: {"image_url" : image_url}
...
To view the url from the content script:
alert(self.options.image_url);
Like Jeff mentioned in that post, you can use the self module to get a URL to your images in the data directory. To get that information into a content script, you can either pass it in via messages (a page-mod example is shown here, but it is similar for all content scripts), or, if you're inlining your content script, you can just 'bake' it in. The self module won't be available in the content script, but passing in a string is fine.
let url = self.data.url('img/myimage.png');
pageMod({
    // the URL must be quoted so it becomes a string literal in the
    // generated script, not bare code
    contentScript: 'var url = "' + url + '";' + 'console.log(url);',
    include: '*.mozilla.org'
})
My question is similar to Download and open pdf file using Ajax, but not exactly the same. The reason I want a jQuery AJAX call is that my file is generated dynamically from data fetched from the same page.
So basically I have a JSON object which needs to be sent to the server, which would dynamically generate a file and send it back to the browser.
I would not like to use query strings on an anchor with my JSON object stringified, as I think it would be a potential threat since query strings have character restrictions (am I right here?).
Please let me know if my workflow is right, or whether I can achieve the same thing using a different flow.
You should not use AJAX to download files. This doesn't work. That being said, you are left with 2 options:
an action link with query string parameters; or, if the download must be triggered at some specific JavaScript event, you can also set window.location.href to the action that is supposed to generate the PDF file and pass query string parameters to it;
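As a rough sketch of that window.location.href option (the /home/download path and the parameter names below are made up for illustration, not from the original question), you could build the query string in JavaScript and then navigate to it:

```javascript
// Build a query string from a small parameter object; assigning the result
// to window.location.href triggers a normal (non-AJAX) download.
function buildPdfUrl(baseUrl, params) {
  var pairs = [];
  for (var key in params) {
    if (Object.prototype.hasOwnProperty.call(params, key)) {
      pairs.push(encodeURIComponent(key) + '=' + encodeURIComponent(params[key]));
    }
  }
  return baseUrl + '?' + pairs.join('&');
}

// In a click handler (browser only):
// window.location.href = buildPdfUrl('/home/download', { reportId: 42, format: 'pdf' });
```

Keep in mind that query strings do have practical length limits, so this only suits small payloads.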
or, if you are afraid that you have lots of data to pass to the controller action that will generate the PDF, you can use a <form> with method="POST" and hidden fields inside it to store as many parameters and as much data as you like:
@using (Html.BeginForm("download", "home"))
{
    ... hidden fields containing values that need to be sent to the server
    <input type="submit" value="Download PDF" />
}
Instead of making one more AJAX call from your page, you can use an anchor tag and a PHP force-download script to perform the PDF download.
HTML
Download pdf here
PHP
<?php
// Caution: $_GET['fileSource'] is used unsanitized here; in production,
// validate it against a whitelist to prevent path traversal.
$fullPath = $_GET['fileSource'];
if ($fullPath) {
    $fsize = filesize($fullPath);
    $path_parts = pathinfo($fullPath);
    $ext = strtolower($path_parts["extension"]);
    switch ($ext) {
        case "pdf":
            header("Content-Disposition: attachment; filename=\"".$path_parts["basename"]."\""); // use 'attachment' to force a download
            header("Content-type: application/pdf"); // add more headers here for other extensions
            break;
        default:
            header("Content-type: application/octet-stream");
            header("Content-Disposition: filename=\"".$path_parts["basename"]."\"");
    }
    if ($fsize) { // only send Content-Length if the file size is known
        header("Content-length: $fsize");
    }
    readfile($fullPath);
    exit;
}
?>
I am checking the file size because if you load the PDF from a CDN (CloudFront), you won't get the size of the document, which causes the document to download as 0 KB. To avoid this, I check with this condition:
if($fsize) {//checking if file size exist
header("Content-length: $fsize");
}
I too was struggling with this problem; the above code worked for me. I hope it helps.
I have a definite answer: you can't download a PDF file using an AJAX request to the server. Instead, you should use an HTML action link, for example:
@Html.ActionLink("Convert Data into PDF", "PDFView", null, new { @class = "btn btn-info" });
The code above will generate a link button through which you can reach the controller's action method. You can use any technique to return the PDF file from the controller; for example, you can return it using the FileResult class.
I am using an .ashx handler to retrieve an image, and I place the image inside an AJAX UpdatePanel. It retrieves the image when a new image is added to the form, but when we change the image it does not update it; it doesn't even call the .ashx file. However, when I refresh the browser it works properly.
Sounds like a caching issue. Try adding some of the lines found here to your ashx file, and it should hopefully force the browser to re-request the image. (I know that the link is for ASP rather than ASP.NET, but things like Response.Expires = -1 should work.)
Alternatively, can you change the path to the image in the UpdatePanel? If you just add a random parameter to the end of it, the browser will treat it as a fresh request. (We use the current date/time as a parameter when we do this; the parameter is ignored by ASP.NET unless you explicitly reference it.)
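A minimal sketch of that date/time trick (the handler path and the d parameter name here are just examples, not from the original page):

```javascript
// Append a timestamp parameter so every request gets a unique URL and the
// browser cannot serve the image from its cache.
function cacheBustedUrl(baseUrl) {
  // use '&' if the URL already has a query string, '?' otherwise
  var sep = baseUrl.indexOf('?') === -1 ? '?' : '&';
  return baseUrl + sep + 'd=' + new Date().getTime();
}

// usage: img.src = cacheBustedUrl('../../handlers/ProcessSignature.ashx?type=View');
```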
Do something like this:
var sPath = "../../handlers/ProcessSignature.ashx?type=View&UserID=" + userID + "&d=" + (((1 + Math.random()) * 0x10000) | 0).toString(16).substring(1);
That puts a 4-character alphanumeric string at the end of your query string. It's not needed by the server, but it forces browsers to pick up the latest version of the image because the URL is different.
I tried the above, and some browsers ignore the headers. I threw all of those in, and Chrome/Firefox 3 didn't try to update.
IE7 worked sometimes.
IE6 just twiddled its thumbs and asked why it was still in existence.
Changing the path as above will fix it in all browsers.