I am building a portlet using Vaadin 6. In the portlet I let the end user download the results of the searches/operations they have done.
What I am doing is generating a zip file on the fly and serving it for download using
getMainWindow().open(resource);
where resource is a FileResource.
Since the search itself is quite complex, there is very little chance I can reuse the results, and, to keep things tidy, I would like to delete the zip file from the server once it has been "consumed" by the download process.
Is there any way I can monitor when the download has completed?
TIA
If your concern is just keeping the server clean, it should be enough to use the tmp dir of your machine. This way, the OS handles deletion for you.
Or you could write your own clean-up process, either with cron or scheduler/timer services.
My project: Go 1.12.5; gin-gonic; vue-cli 3.8.2.
On Windows Server 2008, under my local account, I run main.exe and it works well. But when I log off, all programs started under that account are closed, including my Go server.
The first thing I did was try to configure IIS for my Go server. Nothing good came of it.
Then I tried to run main.exe under the SYSTEM account: psexec -s c:\rafd\main.exe. When I log off, the process does not close, but the frontend is in my account and SYSTEM does not see the local files (js, html, css) of my project.
Tell me how to start the Go server so that my project keeps running after I log off.
Two ways to approach it.
Go with IIS (or another web server).
Should you pick this option, you have further choices:
Leave your project's code as is, but
Make sure it's able to be told which socket to listen for connections on—so that you can tell it to listen, say, on localhost:8080.
For instance, teach your program to accept a command-line parameter for that—such as -listen or whatever.
Configure IIS in a way so that it reverse-proxies incoming HTTP requests on a certain virtual host and/or path prefix to a running instance of your server. You'll have to make the IIS configuration—the socket it proxies the requests to—and the way IIS starts your program agree with each other.
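For illustration, a minimal sketch of such a program (the -listen flag name and the default address are just assumptions, not anything your code already has; with gin you would pass the parsed address to router.Run instead of using net/http directly):

package main

import (
	"flag"
	"log"
	"net/http"
)

func main() {
	// Hypothetical -listen flag; IIS would reverse-proxy requests to
	// whatever address you pass here, e.g. localhost:8080.
	listen := flag.String("listen", "localhost:8080", "address to listen on")
	flag.Parse()

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello"))
	})

	log.Printf("listening on %s", *listen)
	log.Fatal(http.ListenAndServe(*listen, nil))
}

You would then point IIS's reverse proxy at whatever address you start it with, e.g. main.exe -listen localhost:8080.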
Rework the code to use the FastCGI protocol instead.
This basically amounts to using net/http/fcgi instead of plain net/http.
The upside is that IIS (even its dirt-old versions) supports FastCGI out of the box.
The downsides are that FastCGI is believed to be slightly slower than plain HTTP in Go, and that you'll lose the ability to run your program in standalone mode.
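A minimal sketch of what that looks like with the standard library's net/http/fcgi (the address, and the choice between stdin and a TCP socket, depend entirely on how the front-end web server is set up, so treat them as placeholders):

package main

import (
	"log"
	"net"
	"net/http"
	"net/http/fcgi"
)

func main() {
	h := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello over FastCGI"))
	})

	// Option A: let the web server spawn the process and talk FastCGI
	// over stdin:  err := fcgi.Serve(nil, h)

	// Option B: listen on a TCP socket the web server is pointed at
	// (the address is just an assumption for this sketch).
	l, err := net.Listen("tcp", "127.0.0.1:9000")
	if err != nil {
		log.Fatal(err)
	}
	log.Fatal(fcgi.Serve(l, h))
}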
Turn your program into a proper Windows™ service or "wrap" it with some helper tool to make it a Windows™ service.
The former is cleaner as it allows your program to actually be aware of control requests the Windows Service Management subsystem would send to you. You could also easily turn your program into a shrink-wrapped product, if/when needed. You could start with golang.org/x/sys/windows/svc.
The latter may be a bit easier, but YMMV.
If you'd like to explore this way, look for tools like srvany, nssm, winsv etc.
Note that of these, only srvany is provided by Microsoft® and, AFAIK, it's missing since Win7, W2k8, so your best built-in bet might be messing with sc.exe.
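If you go the "proper service" route, a rough skeleton using golang.org/x/sys/windows/svc might look like the following (the service name and the way the HTTP server is started are placeholders, not anything prescribed by the package; error handling is simplified):

package main

import (
	"log"
	"net/http"

	"golang.org/x/sys/windows/svc"
)

// myService runs the web server under the control of the SCM.
type myService struct{}

func (m *myService) Execute(args []string, r <-chan svc.ChangeRequest, status chan<- svc.Status) (bool, uint32) {
	status <- svc.Status{State: svc.StartPending}

	// Start the actual web server in the background.
	// In a real service, report failures to the Event Log instead of log.Fatal.
	go func() {
		log.Fatal(http.ListenAndServe("localhost:8080", nil))
	}()

	status <- svc.Status{State: svc.Running, Accepts: svc.AcceptStop | svc.AcceptShutdown}

	for c := range r {
		switch c.Cmd {
		case svc.Interrogate:
			status <- c.CurrentStatus
		case svc.Stop, svc.Shutdown:
			status <- svc.Status{State: svc.StopPending}
			// Shut the server down cleanly here.
			return false, 0
		}
	}
	return false, 0
}

func main() {
	// "mygoserver" is just a placeholder service name.
	if err := svc.Run("mygoserver", &myService{}); err != nil {
		log.Fatal(err)
	}
}

When started from a console rather than by the service manager, svc.Run will fail; the svc/debug sub-package can run the same handler interactively while developing.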
In either case, should you pick this route, you'll have to deal with the question of setting up proper permissions on your app's assets.
This question is reasonably complex in itself since there are many moving parts involved.
For a start, you have to make sure your assets are accessed not from "the process's current directory" (which may be essentially random when it runs as a service), but either from a place the process was explicitly told about when run (via a command-line option or whatever) or from one figured out using a reasonably engineered guess (and this is a complicated topic in itself).
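As a rough sketch of the "figured out" part, resolving assets relative to the executable rather than the current directory (the "dist" folder name is just an assumption; an explicit flag would override the guess):

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

// assetDir guesses where the static assets live: a "dist" folder
// next to the executable, wherever the executable was installed.
func assetDir() (string, error) {
	exe, err := os.Executable()
	if err != nil {
		return "", err
	}
	return filepath.Join(filepath.Dir(exe), "dist"), nil
}

func main() {
	dir, err := assetDir()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("serving assets from", dir)
}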
Next, you either have to make sure the account your Windows™ uses to run your service really has the permissions to access the place your assets are stored in.
Another possibility is to add a dedicated account and make the SCM use it for running your service.
Note that in either case proper error handling and reporting is paramount: when your program is being run non-interactively, you want to know when something goes wrong: the socket failed to be opened or listened on, assets were not found, access was denied when trying to open an asset file, and so on. In all these cases you have to 1) handle the error, and 2) report it in a way you can deal with it.
For a non-interactive Windows™ program the best way may be to use the Event Log (say, via golang.org/x/sys/windows/svc/eventlog).
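For example, a small sketch of writing to the Event Log with golang.org/x/sys/windows/svc/eventlog (the source name and event IDs are arbitrary placeholders; the source normally has to be registered once at install time, e.g. with eventlog.InstallAsEventCreate):

package main

import (
	"log"

	"golang.org/x/sys/windows/svc/eventlog"
)

func main() {
	// "mygoserver" is a placeholder event source name.
	elog, err := eventlog.Open("mygoserver")
	if err != nil {
		log.Fatal(err)
	}
	defer elog.Close()

	// Event IDs are up to you; 1 and 2 are just arbitrary examples.
	elog.Info(1, "service started")
	elog.Error(2, "failed to open asset file: access denied")
}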
The simplest solution would be using the Windows scheduler.
Start your exe on system logon, with highest privileges, in the background. That way, whenever the system logs on, it will start your exe and keep it running in the background.
You can refer to this answer:
How do I set a Windows scheduled task to run in the background?
I have a few weeks before the next project and I'm looking to streamline our development process to give the UX & dev guys the shortest lead time for change validation (e.g. 10 seconds for a Java change, 1 second for UX/JS changes).
Basically, I want what John Lindquist shows in this video (real-time feedback with WebStorm & the Angular todo list example in 3 minutes), but with Tomcat & Spring.
I've been researching/playing with this for the last few days with our stack (Tomcat 8, IntelliJ 13, Spring 4, Angular) and I am just not 'getting it', so I think it is my lack of knowledge in this area and I'm missing something (hence the SO question).
What I have achieved to date for UX Guys
Grunt (using node) to serve up the 'static resources' (JS/SCSS/templates) and livereload to refresh Chrome - this is very good and very close to what I want (real-time feedback from SCSS/JS/HTML changes), but the big problem is that node is serving the static resources and not Tomcat, so we run into cross-origin policies (solved via this and this). Rebuilds in IntelliJ become messy with Grunt involved - I looked at SCSS compiles with file watchers but it is not gelling. In short, I did not get Grunt serving the static content and Tomcat serving the REST API to work in harmony. Another option was this guy's approach of updating the Tomcat resources with Grunt when a file changes, but I just don't want to go there.
This leads me back to file watchers, JetBrains Live Edit (what the WebStorm video shows) and IntelliJ. Again, I'm close when it comes to static content, as IntelliJ can update the resources on Tomcat on frame deactivation, but (and it's a big but) this is NOT real time, and when you change the resource structure you need to refresh the page. We are working on a SPA which loses context on refresh, which slows the guys down as they have to replay sequences to get back to where the change happened; also, when using IntelliJ they have to 'frame-deactivate' to get the changes pushed to Tomcat (they are on dual monitors, so tabbing off IntelliJ is the same as pushing a button to deploy the changes).
The best so far is Grunt, accepting the same-origin issues for development, but am I missing something for the UX guys?
What I have achieved to date for Dev Guys
Before we start: we can't afford JRebel and I haven't got Spring Loaded working with IntelliJ and Tomcat (yet).
At this stage we simply have Tomcat refreshed by IntelliJ when classes change, and restart it when bean definitions/method structure change. Bad, I know, but 'it is what we are used to'.
Looking at Spring Boot - promising, but ideally I would not like to give away the configuration freedom; it does give live updates on the server, I believe.
Grails is out at the moment so can't benefit there.
I know Play allows some real-time updates but again, I haven't looked at this in detail and it's a big shift from the current stack.
Summary
On the development side we will likely stick to Live Edit and accept the refresh/deactivation issue, so we can't 'achieve' what John Lindquist shows in WebStorm, i.e. real-time updates when resources change, when using Tomcat/IntelliJ/Chrome - or at least I don't know how to achieve this.
Server side - I'm still working on this. I'm going to continue to look at Spring Loaded and IntelliJ integration, then look at JRebel and see what budget, if any, we can get. In the meantime, are there any alternatives? I see the Node/Ruby/Grails guys getting it all, so I believe it must be me and I'm missing the best setup to get super-fast feedback from our code changes when using Tomcat & Spring.
In short, yes it is possible & I have achieved what I set out to achieve - that is, all development changes on a Java EE platform (including JS/SCSS changes and Spring/Java changes) happening in 'real time' (5-10 seconds server side, 2 seconds UX). I have recorded a little video showing it all in action (please excuse the lack of dramatics).
Stack:
AngularJS
Grunt - serving up static pages, with an HTTP proxy for /service context calls. The proxy is needed for 2 reasons: 1 - to get around the cross-origin issues, & 2 - so that real-time static resource changes (HTML/JS/SCSS) are shown in Chrome. You can't do this with Tomcat, as the resources are copied to the web-app folder in Tomcat and not served directly from source (IntelliJ can redeploy on frame deactivation, but that doesn't work well and it doesn't allow instant changes to be reflected in Chrome).
Grunt monitors SCSS changes (I believe you could use file watchers in IntelliJ instead, but I have Grunt serving the static content).
Live Edit updates Chrome on the fly.
JRebel for Spring/server-side changes to Tomcat (licence required for commercial use).
The subtle but important thing is what Grunt is doing.
I think this is a simpler alternative to Ian's solution:
Structure the application in three parts:
REST services (src/main/java) and
Web source files (src/web, don't include them in the final war)
Static files produced by grunt (src/web/dist, and include them in final war at /)
Configure CORS filter (which you need anyway for REST services) [1]
Now the kicker:
Start tomcat (as usual)
Start your angularjs website from src/web (using IntelliJ it's just Debug index.html)
Done -- all your edits in source html/js files are reflected in next browser refresh (or use grunt plugin to make it even more "live")
[1] https://github.com/swagger-api/swagger-core/wiki/CORS
Let me start by saying I understand that Heroku's dynos are temporary and unreliable. I only need them to persist for at most 5 minutes, and from what I've read that generally won't be an issue.
I am making a tool that gathers files from websites and zips them up for download. My tool does everything and creates the zip - I'm just stuck at the last part: providing the user with a way to download the file. I've tried direct links to the file location and HTTP GET requests, and Heroku didn't like either. I really don't want to have to set up AWS just to host a file that only needs to persist for a couple of minutes. Is there another way to download files stored in /tmp?
As far as I know, you have absolutely no guarantee that a request goes to the same dyno as the previous request.
The best way to do this would probably be to either host the file somewhere else, like S3, or to send it immediately in the same request.
If you're generating the file in a background worker, then it most definitely won't work. Every process runs on a separate dyno.
See How Heroku Works for more information on their backend.
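To make the "send it immediately in the same request" idea concrete: the question doesn't say what the tool is written in, so this is purely an illustrative sketch in Go with net/http. buildZip is a hypothetical stand-in for whatever already produces the archive, and the temp file is removed once the response has been written.

package main

import (
	"log"
	"net/http"
	"os"
)

func downloadHandler(w http.ResponseWriter, r *http.Request) {
	// buildZip is a placeholder for whatever the tool already does
	// to create the archive under /tmp.
	path, err := buildZip()
	if err != nil {
		http.Error(w, "could not build archive", http.StatusInternalServerError)
		return
	}
	// Delete the file once this response has been served.
	defer os.Remove(path)

	w.Header().Set("Content-Type", "application/zip")
	w.Header().Set("Content-Disposition", `attachment; filename="result.zip"`)
	http.ServeFile(w, r, path)
}

func buildZip() (string, error) {
	// Stand-in: create a placeholder file in /tmp just so the sketch runs.
	f, err := os.CreateTemp("/tmp", "result-*.zip")
	if err != nil {
		return "", err
	}
	f.Close()
	return f.Name(), nil
}

func main() {
	http.HandleFunc("/download", downloadHandler)
	log.Fatal(http.ListenAndServe(":"+os.Getenv("PORT"), nil))
}

The key point is that the zip is generated and streamed back within the same request, on the same dyno, so nothing has to persist between requests.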
I have a Django application on Heroku, and one thing I sometimes need to do that takes a little bit of time is sending emails.
This is a typical use case for workers. Heroku offers support for workers, but I have to leave them running all the time (or start and stop them manually), which is annoying.
I would like to use a one-off process to send every email. One possibility I first thought of was using IronWorker, since I thought that I could simply add the job to IronWorker's queue and it would be executed with a max of 15 min delay, which is OK for me.
The problem is that with IronWorker I need to put all the modules and their dependencies in a zip file in order to run the job, so in my email use case, as I use "EmailMultiAlternatives" from "django.core.mail.message", I would need to include the whole Django framework in my zip file to be able to use it.
According to this link, it's possible to add/remove workers from the app. Is it possible to start one-off processes from the app?
Does anyone have a better solution?
Thanks in advance
Each of our production web servers maintains its own cache for separate web sites (ASP.NET Web Applications). Currently to clear a cache we log into the server and "touch" the web.config file.
Does anyone have an example of a safe/secure way to remotely reset the cache for a specific web application? Ideally we'd be able to say "clear the cache for app X running on all servers" but also "clear the cache for app X running on server Y".
Edits/Clarifications:
I should probably clarify that doing this via the application itself isn't really an option (i.e. some sort of log in to the application, surf to a specific page or handler that would clear the cache). In order to do something like this we'd need to disable/bypass logging and stats tracking code, or mess up our stats.
Yes, the cache expires regularly. What I'd like to do though is setup something so I can expire a specific cache on demand, usually after we change something in the database (we're using SQL 2000). We can do this now but only by logging in to the servers themselves.
For each application, you could write a little cache-dump.aspx script to kill the cache/application data. Copy it to all your applications and write a hub script to manage the calling.
For security, you could add all sorts of authentication-lookups or IP-checking.
Here's the way I do the actual app-dumping:
Context.Application.Lock()
Context.Session.Abandon()
Context.Application.RemoveAll()
Context.Application.UnLock()
Found a DevX article regarding a touch utility that looks useful.
I'm going to try combining that with either a table in the database (add a record and the touch utility finds it and updates the appropriate web.config file) or a web service (make a call and the touch utility gets called to update the appropriate web.config file)
This may not be "elegant", but you could set up a scheduled task that executes a batch script. The script would essentially "touch" the web.config (or some other file that causes a recompile) for you.
Otherwise, is your application cache not set to expire after N minutes?