The following code calls a helper method that creates and returns an EPPlus ExcelPackage, then streams the package back to the browser:
public ActionResult DownloadMatrixExcel(int projectId)
{
try
{
// Get project details
var project = (from p in db.Projects
where p.ProjectId == projectId
select new
{
companyName = p.Company.Name,
projectName = p.Name
}).Single();
// Must append file type to file download responses
var fileName = project.projectName + "-" + project.companyName + "-" + DateTime.Now.ToString("yyyyMMdd", CultureInfo.InvariantCulture) + ".xlsx";
// Configure response
Response.Clear();
Response.BufferOutput = false;
Response.ContentType = "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet";
Response.AddHeader("content-disposition", "attachment; filename=" + fileName);
// Create and populate excel package
var matrixSpreadsheet = ExcelHelper.BuildMatrixExcel(projectId);
matrixSpreadsheet.SaveAs(Response.OutputStream);
}
catch (Exception e)
{
return Content("Error: " + e.Message);
}
// Download okay - No ViewResult
return new EmptyResult();
}
This works fine in every browser I have tested, but Firefox 18.0.1 (I have yet to test other Firefox versions) trims the file name at the first space, so "someproject - somecompany - thedate" becomes just "someproject". I could do a Replace and remove the spaces, but that makes some file names look a bit odd. The file extension stays intact and there are no other issues, but I wondered if anyone could offer an explanation or fix?
You should place the filename between quote characters ("filename").
Okay, found the answer here while researching another issue: File Download issue in FireFox only
Response.AddHeader("Content-Disposition",
string.Format("attachment; filename=\"{0}\"",
System.IO.Path.GetFileName(FileName)));
This will also give the file the correct content type when you choose to save rather than open in the browser in Firefox.
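Applied to the controller in the question, the header line becomes the following (a sketch reusing the fileName variable from the asker's code):

// Quoting the file name stops Firefox from truncating it at the first space.
Response.AddHeader("Content-Disposition",
    string.Format("attachment; filename=\"{0}\"", fileName));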
(eXist 4.4, XQuery 3.1)
I offer the user the ability to download PDF documents which are dynamically created at the moment of request. The request has two parameters: the document name (e.g. doc=MS609-0002.pdf) and the document language version (e.g. lang=EN).
The function that produces the output is in download.xql:
declare function download:download($node as node(), $model as map(*), $doc as xs:string, $lang as xs:string)
{
...
return response:stream-binary($pdf,"application/pdf", $filename)
}
It outputs a PDF fine, both from a direct call in an IDE and when I call the function through an eXist HTML template, for example:
http://localhost:8081/exist/apps/deheresi/download?doc=MS609-0002.pdf&lang=EN
However, using HTML means opening another browser window.
Instead, I'd like to issue a REST GET from a button. I've looked at the eXist REST documentation, but I can't get it to work.
According to the documentation, I should issue a GET structured as follows :
http://localhost:8081/exist/rest/db/deheresi/download.xql?doc=MS609-0002.pdf&lang=EN
But when I make that request, I get:
HTTP ERROR 404
Problem accessing /exist/rest/db/deheresi/download.xql.
Reason: Document /db/deheresi/download.xql not found
This variation with /exist/rest/apps/: http://localhost:8081/exist/rest/apps/deheresi/download.xql?doc=MS609-0002.pdf&lang=EN
Returns the following message with a blank tree:
This XML file does not appear to have any style information associated with it. The document tree is shown below.
And this variation with /exist/db/apps/: http://localhost:8081/exist/db/apps/deheresi/download.xql?doc=MS609-0002.pdf&lang=EN
Returns:
XQueryServlet Error
Error found
Message: Cannot read source file
/Applications/eXist-db.app/Contents/Resources/eXist-db/webapp/db/apps/deheresi/download.xql
I've tested file permissions and there seems to be no problem there. Could there be a REST permission or configuration requirement that I am not aware of? Are there issues with REST on localhost?
EDIT: this is the full function that should process the REST request:
xquery version "3.1";
module namespace download="/db/apps/deheresi/modules/download";
declare namespace templates="http://exist-db.org/xquery/templates";
declare namespace tei="http://www.tei-c.org/ns/1.0";
declare namespace xsl = "http://www.w3.org/1999/XSL/Transform";
import module namespace xslfo = "http://exist-db.org/xquery/xslfo";
import module namespace document="/db/apps/deheresi/modules/document" at "/db/apps/deheresi/modules/document.xql";
import module namespace document-view="/db/apps/deheresi/modules/document-view" at "/db/apps/deheresi/modules/document-view.xql";
import module namespace document-preprint="/db/apps/deheresi/modules/document-preprint" at "/db/apps/deheresi/modules/document-preprint.xql";
import module namespace document-print="/db/apps/deheresi/modules/document-print" at "/db/apps/deheresi/modules/document-print.xql";
import module namespace functx="http://www.functx.com" at "/db/apps/deheresi/modules/functx.xql";
import module namespace globalvar="/db/apps/deheresi/modules/globalvar" at "/db/apps/deheresi/modules/globalvar.xqm";
declare function download:download($doc as xs:string?, $lang as xs:string?)
{ (: parse $doc to get name of XML to transform, send back pdf with same name :)
let $docset := upper-case(substring-before($doc,"."))
let $filename := concat($docset,".pdf")
let $document := doc(concat($globalvar:URIdata,concat($docset,".xml")))
let $language := if (lower-case($lang) = "fr")
then lower-case($lang)
else "en"
let $filename := concat($docset,".pdf")
(: get XSLT stylesheet :)
let $fostylesheet := document-print:single-doc-fo-stylesheet($language)
(: get XEP FO config:)
let $config := util:expand(doc("/db/apps/deheresi/xep.xml")/*)
(: get xml for transformation in correct language :)
let $xml := document-preprint:single-doc-preprint($document, $language)
(: create FO xml :)
let $fo := util:expand(transform:transform($xml, $fostylesheet, ()))
(: render pdf :)
let $pdf := xslfo:render($fo, "application/pdf", (), $config)
return response:stream-binary($pdf,"application/pdf", $filename)
};
NB: I've put a bounty on this in hopes of receiving a response that walks through the REST input and output of the function, with an example of retrieving a PDF that is generated on the fly. This includes any configuration/permission issues that could affect a REST request.
Since you state the PDF is returned when you call this:
http://localhost:8081/exist/apps/deheresi/download?doc=MS609-0002.pdf&lang=EN
Perhaps what you should be doing is handling that response. A simple example would be this, in jQuery using FileSaver.js (you can find FileSaver.js online, then download it and include it in your pages alongside jQuery):
function preview_cover(path){
var pdffilename="cover.pdf";
var xhr = new XMLHttpRequest();
xhr.onreadystatechange = function(){
if (this.readyState == 4 && this.status == 200){
saveAs(this.response, pdffilename);
}}
xhr.open('GET', 'cover-formatter.xq?cover=' + path + '&page_width=' + page_width + '&page_height=' + page_height);
xhr.setRequestHeader('Authorization','Basic ' + sessinfo);
xhr.responseType = 'blob';
xhr.send();
}
The above example will download the PDF using modern browsers (Chrome, Firefox, Edge).
The code behind is this (I snipped all the other stuff away, just leaving the formatting part):
let $fo := if ($territory = 'WALES') then util:expand(transform:transform($doc, doc("/db/EIDO/data/edit/xsl/EIDOcoverbilingual.xsl"), $parameters))
else util:expand(transform:transform($doc, doc("/db/EIDO/data/edit/xsl/EIDOcover.xsl"), $parameters))
let $pdf := xslfo:render($fo, "application/pdf", (), $config)
let $headers := response:set-header("Content-Disposition", "attachment;filename=document.pdf")
return
response:stream-binary($pdf, "media-type:application/pdf","document.pdf")
Below is a lengthier jQuery/JavaScript example that handles the response on the JavaScript side. There are a few tricks to note first. One hack is that iOS and IE9 browsers cannot handle binary downloads in the browser, so the server-side code creates the PDF and, if the browser is IE9 or iOS, stores the result in the DB (or AWS S3) and returns a link to that PDF so that it can be "clicked" to view. Other common browsers can handle the binary data sent back automatically if it is done correctly; for those we use the FileSaver.js plugin, which will download the PDF.
Frankly, you can ignore the other parts: logEvent sends an event to Google Analytics, and the totformats variable tracks a user's downloads and limits them within any one session. The hack for Chrome downloads is likely no longer required, as it worked around a bug in Chrome for Android. Adding and removing the 'loader' classes is for the GUI. The IE9/iOS solution uses the server IP as a variable because the database is replicated and load-balanced across many countries; since the result of this one call is written to the DB, we need the IP address of the exact server that holds it. This will go away with S3 integration.
Essentially, the key is that it calls the same URL you would and saves the response using:
saveAs(this.response, pdffilename);
This is a call into FileSaver.js, which handles saving the binary data from an XHR GET and downloading it for you. I have snipped this out of a much larger codebase that handles all of the downloads, including dynamically generated ones from RenderX, as yours is, but also static PDFs.
The call is straightforward: just a GET to customer-formatter.xq, which in my case is the same as calling http://localhost/customer-formatter.xq (because I strip out /exist and my port for Jetty is 80):
xhr.open('GET', 'customer-formatter.xq?masterlang=' + masterlang + '&doclang=' + doclang + '&specialty=' + specialty + '&article=' + docnum + '&user_name=' + loggedInUser + '&territory=' + territory + '&expiry=' + expiry + '&page_width=' + page_width + '&page_height=' + page_height + '&column_count=' + column_count + '&phrasechange=' + phrasechange + '&genlink=' + genlink + '&access=' + access + '&scalefont=' + scalefont + '&skin=' + skin + '&watermark=' + watermarkmsg + '&timestamp=' + timestamp);
totformats++;
if (totformats > maxformats)
window.location.href = '/user?logout=logout';
var docfilename = ((doclang) ? doclang : '') + ((doctype) ? doctype : '');
var pdffilename = docnum + '-' + docfilename + '.pdf';
var xhr = new XMLHttpRequest();
xhr.onreadystatechange = function(){
if (this.readyState == 4 && this.status == 200){
// Do IE9 stuff or iPhone/iPad
if (version == 9) {
var ip = this.responseText;
var a = document.createElement("a");
a.style = "cursor: pointer;";
document.body.appendChild(a);
var url = 'http://' + ip + '/IE9/' + loggedInUser + '-' + docnum + '-English.pdf';
a.href = url;
$(a).attr('target','_blank');
a.click();
$(a).remove();
$(doc).removeClass('loader');
$(doc).prop('disabled',false);
}
else if (isiOS) {
var ip = this.responseText.trim();
ioswindow.location.href = 'http://' + ip + '/IE9/' + loggedInUser + '-' + docnum + '-English.pdf';
$(doc).removeClass('loader');
$(doc).prop('disabled',false);
}
// Hack to partially fix Chrome error, file is now in Chrome downloads
else if (Math.max(document.documentElement.clientWidth, window.innerWidth || 0) <= 1024 && window.chrome) {
var blob = new Blob([this.response], {type: 'application/pdf'});
var a = document.createElement("a");
a.style = "display: none";
document.body.appendChild(a);
var url = window.URL.createObjectURL(blob);
a.href = url;
a.download = pdffilename;
a.click();
window.URL.revokeObjectURL(url);
$(doc).removeClass('loader');
$(doc).prop('disabled',false);
}
else {
saveAs(this.response, pdffilename);
$(doc).removeClass('loader');
$(doc).prop('disabled',false);
}
}
}
xhr.open('GET', 'customer-formatter.xq?masterlang=' + masterlang + '&doclang=' + doclang + '&specialty=' + specialty + '&article=' + docnum + '&user_name=' + loggedInUser + '&territory=' + territory + '&expiry=' + expiry + '&page_width=' + page_width + '&page_height=' + page_height + '&column_count=' + column_count + '&phrasechange=' + phrasechange + '&genlink=' + genlink + '&access=' + access + '&scalefont=' + scalefont + '&skin=' + skin + '&watermark=' + watermarkmsg + '&timestamp=' + timestamp);
xhr.setRequestHeader('Authorization','Basic ' + sessinfo);
if (isiOS)
xhr.responseType = 'text';
else
xhr.responseType = 'blob';
xhr.send();
logEvent(docnum, doclang, 'format', specialty, source, docname);
Your download:download function is written in such a way that it works with eXist-db templating. I would suggest abstracting the actual download logic into a separate function in a separate library module.
You can then have your download:download function call your abstracted download logic function, and you can also create a new main module like direct-download.xq or whatever which just processes the URL and then calls your abstracted download logic function.
I've tried a number of different options, but no matter what I do, it either does nothing or always returns a newValue error.
newValue cannot be null.
It seems I'm not the only one to hit this, though the library has had updates since the question linked below.
docX ReplaceText works incorrect
Below is my original example:-
if (sur.RequestType)
{
templateDoc.ReplaceText("[#1]", "x");
templateDoc.ReplaceText("[#2]", "");
}
else
{
templateDoc.ReplaceText("[#1]", "");
templateDoc.ReplaceText("[#2]", "x");
}
When debugging this, it would get to line 4, then jump to line 9, where it would throw the "newValue cannot be null" error on the next step.
So I tried:-
string temp1 = "temp1";
if (sur.RequestType)
{
templateDoc.ReplaceText("[#1]", "x");
templateDoc.ReplaceText("[#2]", temp1, false, RegexOptions.IgnoreCase, paraFormat, paraFormat, MatchFormattingOptions.SubsetMatch);
}
else
{
templateDoc.ReplaceText("[#1]", "x.x");
templateDoc.ReplaceText("[#2]", "x", false, RegexOptions.IgnoreCase, paraFormat, paraFormat, MatchFormattingOptions.SubsetMatch);
}
Along with a couple of other tweaks, all returning the same error.
Prior to using ReplaceText I'd used the example from the sample project:-
templateDoc.AddCustomProperty( new CustomProperty( "CompanySlogan", "Always with you" ) );
templateDoc.AddCustomProperty( new CustomProperty( "ClientName", "James Doh" ) );
Here it would step through each line, but nothing had been replaced in the produced document.
Lastly, and more off topic: if anybody has a better solution, I've been stuck going back and forth trying to output the file without saving it, but had issues converting it from the Xceed DocX type to an HttpResponseMessage.
Below is my least favourite implementation of this, as I'd either like to save it to a database, or skip saving the file and provide it directly to the user to save where they want, instead of keeping a server-side copy.
[HttpGet]
public HttpResponseMessage DownloadRecord(int id)
{
SURequest sur = _sURequestsService.GetRequestData(id);
var fullPath = System.Web.Hosting.HostingEnvironment.MapPath(@"~/Content/RequestForm.docx");
var fullPath2 = System.Web.Hosting.HostingEnvironment.MapPath(@"~/Content/RequestFormUpdated.docx");
var templateDoc = DocX.Load(fullPath);
var template = CreateRequestFromTemplate(templateDoc, sur);
template.SaveAs(fullPath2);
//using (FileStream fs2 = new FileStream(@"~/Content/RequestFormUpdated.docx", FileMode.Create))
//{
// template.SaveAs(fs2);
//}
HttpResponseMessage result = new HttpResponseMessage(HttpStatusCode.OK);
var stream = new FileStream(fullPath2, FileMode.Open);
result.Content = new StreamContent(stream);
result.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment");
result.Content.Headers.ContentDisposition.FileName = Path.GetFileName(fullPath2);
result.Content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
result.Content.Headers.ContentLength = stream.Length;
return result;
//return fs2;
}
I'm stuck, with no clue how to proceed further with Xceed, so I'm going to branch my present code and try OpenXML, to see if I have better luck there, unless someone can spot what I'm doing wrong or how to get past the issue in Xceed.
Any help would be much appreciated.
This turned out to be an issue with VS 2017, which was behaving strangely with ReplaceText and seemed to have cached an earlier error in its compiler.
It behaved as if the issue was somewhere it wasn't, and could only be resolved by manually stopping the compiler process.
Still no resolution for AddCustomProperty, or for skipping the generation of a local file.
I'm going to keep working on getting it not to generate a local file, but will likely need to either open a new question specific to that, or set up something else to clean up old files; one possible direction is sketched below.
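For the no-local-file part, here is a minimal sketch of one possible approach (untested, and not from the original thread). It assumes DocX exposes a SaveAs(Stream) overload, reuses the types and usings from the action above, and renames the action DownloadRecordInMemory purely for illustration:

[HttpGet]
public HttpResponseMessage DownloadRecordInMemory(int id)
{
    SURequest sur = _sURequestsService.GetRequestData(id);
    var fullPath = System.Web.Hosting.HostingEnvironment.MapPath(@"~/Content/RequestForm.docx");
    var templateDoc = DocX.Load(fullPath);
    var template = CreateRequestFromTemplate(templateDoc, sur);
    // Save the document into memory instead of onto disk.
    var stream = new MemoryStream();
    template.SaveAs(stream);
    stream.Position = 0; // rewind before handing the stream to the response
    var result = new HttpResponseMessage(HttpStatusCode.OK);
    result.Content = new StreamContent(stream);
    result.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment");
    result.Content.Headers.ContentDisposition.FileName = "RequestFormUpdated.docx";
    result.Content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
    result.Content.Headers.ContentLength = stream.Length;
    return result;
}

No server-side copy is created, so there is nothing to clean up afterwards.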
What I'm trying to implement is giving the users the ability to export the grid data to an excel file and download it, with the help of a file save dialog.
Here's how I have coded it right now -
In JavaScript -
$.post("/irn/Identifier/Download", { "columnValues": columnValues });
In the Identifier controller's Download action -
public FileResult Download(string columnValues)
{
DTData headlineRows = (DTData)Newtonsoft.Json.JsonConvert.DeserializeObject(columnValues, typeof(DTData));
var e = new Services.DownloadToExcel();
return File(e.WriteData(headlineRows), "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet", "testfile.xlsx");
}
In the DownloadToExcel class, inside the WriteData function I have -
//Here, I'm using the EPPlus library to write the column data to an Excel file, and then I'm returning the data as a byte array -
//Some code that writes the data
return packages.GetAsByteArray();
When I run this code, I expect to see a file save dialog in the browser, but nothing happens. There aren't any errors on the C# or JavaScript side. Can anyone tell me what I could be doing wrong?
A bit late, but I was having a similar issue. To solve it, I used JSON.stringify(columnValues) on the client to convert my data into a JSON string before sending it to the controller.
Then instead of using
$.post("/irn/Identifier/Download", { "columnValues": columnValues });
try
var columnValuesString = JSON.stringify(columnValues);
window.location = '/irn/Identifier/Download?columnValues=' + encodeURIComponent(columnValuesString);
Changing the $.post() to a window.location makes it work.
Then you can deserialize the JSON string in the controller, and your Open/Save dialog should appear after hitting your link.
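On the controller side, the deserialization can stay essentially as in the question; a sketch (reusing the DTData and DownloadToExcel types from the question, with the generic overload of JsonConvert.DeserializeObject):

public FileResult Download(string columnValues)
{
    // columnValues arrives as the JSON string built by JSON.stringify on the client.
    var headlineRows = Newtonsoft.Json.JsonConvert.DeserializeObject<DTData>(columnValues);
    var e = new Services.DownloadToExcel();
    return File(e.WriteData(headlineRows),
        "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
        "testfile.xlsx");
}

Because the request is now a plain navigation rather than an XHR, the browser treats the returned FileResult as a download and shows the Open/Save dialog.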
I hope this helps someone else. Let me know and I can post my code if needed.
Thanks.
If you're testing the site in Internet Explorer, try the following:
Open Internet Options -> Advanced. Click Reset. You can also choose to Restore Advanced Settings.
Open Internet Options -> Security. If zones have been changed at all, click "Reset all zones to default level".
Changes to these settings may affect whether or not Internet Explorer accepts file downloads.
More information here: http://answers.microsoft.com/en-us/ie/forum/ie8-windows_other/ie-8-will-not-let-me-download-any-files-music-pdf/bc59ba24-866b-4dbf-93f2-85ebb9912c2c
You can use an iframe to help download the file.
function postToIframe( url,data) {
var target = "downloadIFrame";
$('<iframe name="' + target + '" style="display:none"/>').appendTo('body');
$('body').append('<form action="' + url + '" method="post" target="' + target + '" id="postToIframe"></form>');
$.each(data, function (n, v) {
$('#postToIframe').append('<textarea name="' + n + '">' + v + '</textarea>');
});
$('#postToIframe').submit().remove();
}
I solved this but forgot to update here -
This worked -
Inside my class -
private const string MimeType = "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet";
private ExcelPackage package = new ExcelPackage();
private FileContentResult excelFile;
// ... write data using EPPlus ...
excelFile = File(package.GetAsByteArray(), MimeType, FileName);
return excelFile;
I've encountered a strange problem. It's strange to me because I don't understand it, and because everything worked fine before. My task is to call a controller, passing it a file name (with extension); the controller should recognize the file, log the download, and then return the file itself if it exists (in the Downloads folder). Here is what I'm doing:
public class DownloadController : Controller
{
public ActionResult Files(string id)
{
string filePath = Server.MapPath(Url.Content(string.Format("~/Downloads/{0}", id)));
string serverPath = Url.Content(string.Format("~/Downloads/{0}", id));
string ext = Path.GetExtension(filePath);
if (!System.IO.File.Exists(filePath))
{
//return to the error controller
}
string mem = "text/html";
if (ext == ".zip")
{
mem = "application/x-zip-compressed";
}
else if (ext == ".html" || ext == ".htm")
{
mem = "text/html";
}
else if (ext == ".pdf")
{
mem = "application/pdf";
}
//Save info about downloads into DB
repStat.SaveStatInfo(id, HttpContext.Request.UserHostAddress,
HttpContext.Request.UserHostName, HttpContext.Request.UserAgent);
return File(serverPath, mem, id);
}
}
Here is part of the Global.asax:
routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
routes.IgnoreRoute("Content/{*pathInfo}");
It doesn't have any more ignores.
So, the problem: when I make the call mysite.com/download/files/test.pdf, the server returns "Page not found". Of course there is no such file at the download/files path! The request should hit the controller, not a real file. As soon as I remove the extension, as in mysite.com/download/files/test, the server calls the controller. I don't understand why it doesn't treat the file name as just a parameter, and instead tries to find the file.
The behaviour is exactly the same with other controllers: as long as the parameter has no extension, everything works; otherwise the server looks for a file.
The strangest thing is that everything works locally but not on the server (it worked for a short while, then stopped).
It looks strange to me too. A workaround would be to encode the dot (.), for example
test.pdf -> test++pdf
and then decode it back on the server.
Probably not the best solution, but it will certainly get around the problem.
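A hypothetical sketch of that workaround on the controller side (the ++ convention and the decoding line are illustrative, not from the original code):

public ActionResult Files(string id)
{
    // The client requests mysite.com/download/files/test++pdf; decode it back.
    // The extensionless URL is not claimed by the IIS static-file handler,
    // so routing reaches the controller as expected.
    string fileName = id.Replace("++", ".");
    // ... MIME-type detection and logging as in the original action, using fileName ...
    return File(Url.Content("~/Downloads/" + fileName), "application/pdf", fileName);
}

For what it's worth, the usual explanation for this behaviour is that IIS hands URLs containing a file extension to its static-file handler instead of to managed routing, which is why the extensionless variant reaches the controller; adjusting the server configuration (for example, runAllManagedModulesForAllRequests in web.config) is the other common way to address it.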
I use MEF to extend my web application and I use the following folder structure
> bin
> extensions
>   > Plugin1
>   > Plugin2
>   > Plugin3
To achieve this automatically, the plugin projects' output paths are set to these directories. My application works with and without Azure. My problem now is that it seems impossible to automatically include the extensions subdirectory in the Azure deployment package.
I've tried to set the build dependencies too, without success.
Is there another way?
Well, I've struggled with the bin folder too. The issue (if we may call it an "issue") is that the packaging process only packs what has "Copy to Output Directory" set to "Copy if newer/always", and only for the web application (web role) project. Other assemblies in the bin folder that are not explicitly referenced by the web application will not get deployed.
In my case, where I have pretty "static" references, I just pack them in a ZIP, put them in a blob container, and then use the Azure Bootstrapper to download and extract these references into the bin folder. However, because I don't know the actual location of the bin folder in a startup task, I use helper wrappers around the bootstrapper to do the trick.
You will need to get the list of local sites, which can be accomplished by something similar to:
public IEnumerable<string> WebSiteDirectories
{
get
{
string roleRootDir = Environment.GetEnvironmentVariable("RdRoleRoot");
string appRootDir = (RoleEnvironment.IsEmulated) ? Path.GetDirectoryName(AppDomain.CurrentDomain.BaseDirectory) : roleRootDir;
XDocument roleModelDoc = XDocument.Load(Path.Combine(roleRootDir, "RoleModel.xml"));
var siteElements = roleModelDoc.Root.Element(_roleModelNs + "Sites").Elements(_roleModelNs + "Site");
return
from siteElement in siteElements
where siteElement.Attribute("name") != null
&& siteElement.Attribute("name").Value == "Web"
&& siteElement.Attribute("physicalDirectory") != null
select Path.Combine(appRootDir, siteElement.Attribute("physicalDirectory").Value);
}
}
Where the _roleModelNs variable is defined as follows:
private readonly XNamespace _roleModelNs = "http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition";
Next you will need something similar to this method:
public void GetRequiredAssemblies(string pathToWebBinfolder)
{
string args = string.Join("",
@"-get https://your_account.blob.core.windows.net/path/to/plugin.zip -lr $lr(temp) -unzip """,
pathToWebBinfolder,
@""" -block");
this._bRunner.RunBootstrapper(args);
}
And RunBootstrapper has the following signature:
public bool RunBootstrapper (string args)
{
bool result = false;
ProcessStartInfo psi = new ProcessStartInfo();
psi.FileName = this._bootstrapperPath;
psi.Arguments = args;
Trace.WriteLine("AS: Calling " + psi.FileName + " " + psi.Arguments + " ...");
psi.CreateNoWindow = true;
psi.ErrorDialog = false;
psi.UseShellExecute = false;
psi.WindowStyle = ProcessWindowStyle.Hidden;
psi.RedirectStandardOutput = true;
psi.RedirectStandardInput = false;
psi.RedirectStandardError = true;
// run elevated
// psi.Verb = "runas";
try
{
// Start the process with the info we specified.
// Call WaitForExit and then the using statement will close.
using (Process exeProcess = Process.Start(psi))
{
exeProcess.PriorityClass = ProcessPriorityClass.High;
string outString = string.Empty;
// use asynchronous reading for at least one of the streams
// to avoid deadlock
exeProcess.OutputDataReceived += (s, e) =>
{
outString += e.Data;
};
exeProcess.BeginOutputReadLine();
// now read the StandardError stream to the end
// this will cause our main thread to wait for the
// stream to close
string errString = exeProcess.StandardError.ReadToEnd();
Trace.WriteLine("Process out string: " + outString);
Trace.TraceError("Process error string: " + errString);
result = true;
}
}
catch (Exception e)
{
Trace.TraceError("AS: " + e.Message + e.StackTrace);
result = false;
}
return result;
}
Of course, in your case you might want something a bit more complex, where you first fetch all plugins via code (if each plugin is in its own ZIP) and then execute GetRequiredAssemblies once for each plugin. This code might run in the RoleEntryPoint's OnStart method.
Also, if you plan to be more dynamic, you can override the Run() method of your RoleEntryPoint subclass and check for new plugins every minute, for example, as sketched below.
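A minimal sketch of that idea (CheckForNewPlugins is a placeholder, not a real API; it assumes the usual Microsoft.WindowsAzure.ServiceRuntime and System.Threading usings):

public class WebRole : RoleEntryPoint
{
    public override void Run()
    {
        while (true)
        {
            // Hypothetical helper: list the plugin container and call
            // GetRequiredAssemblies for any package not yet extracted.
            CheckForNewPlugins();
            Thread.Sleep(TimeSpan.FromMinutes(1));
        }
    }
}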
Hope this helps!
EDIT
As for how you get the plugins deployed: you can either upload your plugins manually, or develop a small custom build task to upload a plugin automatically upon build, as sketched below.
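A skeleton of such a build task might look like this (the task name and properties are made up for illustration; the actual upload is left to your storage client of choice):

using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;

public class UploadPluginTask : Task
{
    [Required]
    public string PluginZipPath { get; set; }  // path to the zipped plugin output

    [Required]
    public string ContainerUrl { get; set; }   // destination blob container

    public override bool Execute()
    {
        Log.LogMessage("Uploading {0} to {1}", PluginZipPath, ContainerUrl);
        // ... upload PluginZipPath to blob storage here ...
        return true;
    }
}

You would then reference the compiled task from the plugin project's .csproj via a UsingTask element and invoke it from an AfterBuild target.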