Challenges in remotely running a big RIA application - ajax

I have a big rich-internet-application file (qooxdoo, JS, HTML). Users point their browsers at the web server and run it. The problem is that it takes a long time to load the application every time they visit the site.
Is there a way to somehow "bundle" and save the application locally, and have the user open it locally? So the URL would be like [c:/]/home/myfiles/application/index.html instead of http://site/path-to-app?
I was thinking of something like Java's JAR files to bundle the application and make it runnable locally in browsers, while the application still reaches the external website to get data.
Any ideas?!
Thanks in advance.

The browser should cache all the files, so the second load of the app should be quite fast. If that's not the case, maybe you are not using the qooxdoo build version of your application, or you disabled the optimizations of the build process.
But there are two ways to get a desktop-like application:
You can offer the files you would otherwise upload to the server as a ZIP archive and let the user unzip it. If you don't need a web server to run the files, that should work.
If you want to build a real desktop application, you should have a look at Titanium [1], which can bring a web app to the desktop.
[1] http://www.appcelerator.com/products/titanium-desktop/

Running the qooxdoo application from the file system, as Martin said, should not be a problem. But you have to ensure that the crossDomain property of, for example, qx.io.remote.Request [1] is set to true; otherwise the browser's same-origin policy (SOP) blocks the requests to the server.
[1] http://demo.qooxdoo.org/current/apiviewer/#qx.io.remote.Request~crossDomain
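For illustration, a minimal sketch of that setting, assuming the classic qx.io.remote.Request API (the endpoint URL is a placeholder, not from the original question):

    // Minimal sketch: a remote request from an app served via file://.
    // The URL is hypothetical; setCrossDomain(true) keeps the browser's
    // same-origin policy from blocking the call (transport details vary
    // by qooxdoo version).
    var req = new qx.io.remote.Request(
      "http://site/path-to-data",   // placeholder data endpoint
      "GET",
      "application/json"
    );
    req.setCrossDomain(true);
    req.addListener("completed", function(e) {
      var result = e.getContent();  // data returned by the server
      // ... hand the result to the application
    });
    req.send();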

Related

Bidirectional Communication between a Windows Service and WinForms App

I am looking for a simple way to send messages between a WinForms application and a Windows service. The service will run under LocalSystem, so it will be able to install updates to my WinForms app. The app runs in very locked-down environments where ports will be blocked, and the file system is not reliable enough to use for logging. I have tried using named pipes but could not get them to work. I want to keep it simple, so I was thinking of trying memory-mapped files.
I only want to pass simple strings back and forth between the app and service, e.g.
App -> Service [Please download this file http... and place it here C:\Program Files...]
Service -> App [0% downloaded]
Service -> App [1% downloaded]
etc.
Service -> App [Update Complete/Failed]
I can't seem to find a good example of how this can be achieved. Are memory-mapped files the best way to go? If so, where do I start?! I have been reading through this post but I cannot seem to make sense of it; it's been a long day! I want everything to be in memory, unlike in this example. Can anyone help?

Performance of a Java application rendering video files

I have a Java/J2EE application deployed in a Tomcat container on a Windows server. The application is a training portal where training files such as PDF/PPT/Flash/MP4 files are read from a share path. When the user clicks a training link, the associated file is downloaded from the share path to the client machine and starts running.
When the user clicks MP4/Flash/PDF files, they take too long to open.
Is there anything we need to configure at the application level? Is it a load configuration on the server? Or is it something that needs attention in the WAN settings?
I am unable to find a solution for this issue.
Please post your thoughts.
I'm not 100% sure because there are not many details, but I'm 90% sure that the application code is not the main problem.
For example:
When the user clicks MP4/Flash/PDF files, they take too long to open.
A PDF is basically just a string of bytes. Flash is a client-side technology. And I'm pretty sure that you just send a stream to a video player in order to play an MP4. So the server is supposed to just take the file and send it. The server is probably not the source of your problem, if we assume it can handle the number of requests.
So it's about your network: it is too slow for some reason. It's difficult to be more specific without more details.
Regards.

What is the best way to set up the API application using Protractor?

I'm setting up my front-end application to use continuous integration in CircleCI. Unit tests work fine, but end-to-end tests do not.
The problem is that they require the backend (API) server to be running, and ours is a completely different application. So what is the best way to set up this backend server (with CI in mind)?
I thought about uploading it to Heroku, but then I'd have to keep manually updating the code via git. Another option was to download the code to the CI VM and run the server directly there, but that is just too much work (installing Ruby, Postgres, gems...), and it doesn't seem like the best option in any way.
Has anyone been through the same situation? How do you usually deal with this kind of situation?
I ended up doing everything inside the CI. I made some custom scripts that configure the backend project every time the test suite is run. Also, I cached the folder with the backend code and the gems (which were taking ~2 min to install).
The configuring part now adds ~20 seconds to the total time, so it wasn't a big deal. Although I still think that this is probably not the best way to do this, it has some advantages, such as not worrying about updating the backend code (it pulls from master automatically) or its database (it runs rake db:reset after updating the code).
Assuming the API server is running somewhere, configure the front-end application to point there while in the test/CI environment, at least to start out. If there are multiple API environments, choose the one that most closely matches the front-end environment (e.g. dev, staging).
It gets more complicated if/when you need to run the e2e tests each time the API is built or match up specific build versions of the front-end and the API. In that case you will have to run the API server as part of the test.
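For illustration, a minimal protractor.conf.js sketch of that idea; the URLs, spec path, and the E2E_BASE_URL environment variable are hypothetical, not taken from the original setup:

    // protractor.conf.js -- minimal sketch; names and URLs are placeholders.
    exports.config = {
      // Point the e2e suite at whichever environment is under test.
      // A CI job can override the target via an environment variable
      // without any code changes.
      baseUrl: process.env.E2E_BASE_URL || 'http://staging.example.com',

      specs: ['e2e/**/*.spec.js'],
      framework: 'jasmine',

      capabilities: {
        browserName: 'chrome'
      }
    };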

What is a good framework for deploying a portable HTML/JavaScript Windows application?

I need to deploy an application onto some Windows machines for purposes of data collection from a group of people (i.e. the application will be used to gather responses to a series of survey questions). The process is interactive, alternating between displays of text and images with specific timing requirements. I have put together a prototype application using HTML and JavaScript that implements the survey. However, there are some unique constraints on the deployment environment that have me stuck:
While the machine is Internet-connected, the client requires that the survey application run fully locally on the PC. Therefore, sending the survey results to a remote server is not permissible. Obviously, saving to a local file from a web browser is typically not permitted for security reasons.
Installation of applications onto the machines that will run the survey is not permitted.
The configuration of the machines is not known specifically a priori, but I can assume some recent version of Windows with IE8+.
The "no remote access" requirement was a late comer, and has thrown a wrench into the plan of just writing a simple Web application that could post results to an HTTP server. I'm now looking for the easiest way forward. Two main approaches come to mind:
Use a GUI framework that provides a control that can display HTML/JavaScript; running a full-blown application on the PC would allow me to save the results to the filesystem. I've never done this, but it seems like in this day and age it shouldn't be too difficult. This would allow me to reuse much of my existing prototype implementation, but I would need some way of transferring the results (which would be stored in a JavaScript data structure) out of the Web control to where the rest of the application could access it (see the sketch after these two options).
Reimplement the entire application using some GUI framework (I've used PyQt successfully before, although not on Windows). This approach is obviously less desirable than #1 due to the lack of reuse. However, it may be necessary if #1 isn't feasible.
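Regarding option #1, this is roughly what the page-side half of that hand-off could look like, assuming the host (e.g. an IE WebBrowser-style control) injects a bridge object as window.external; the saveResults method name is made up for illustration:

    // Sketch of the page-side hand-off for option #1. The hosting
    // application is assumed to expose a scripting bridge on
    // window.external (as the IE WebBrowser control can); the
    // saveResults method name is hypothetical.
    var results = { participant: "p01", answers: ["yes", "no"] };
    try {
      // Hand the collected results to the host as a JSON string
      // (JSON.stringify is available natively in IE8+).
      window.external.saveResults(JSON.stringify(results));
    } catch (e) {
      // No host bridge present, e.g. page opened in a plain browser.
    }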
Any recommendations for the best way to go? Ideally, I'm looking for a solution that can be run in a "portable" manner from a USB thumbdrive or similar.
Have you looked at HTML Applications (HTA)? They work in IE5+ and can use Windows Scripting Host to write to local drives and UNC shares...
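As a minimal sketch of that approach (the file name, path, and data format are illustrative only; the FileSystemObject write is exactly what a normal browser page would not be allowed to do):

    <!-- survey.hta -- minimal HTML Application sketch. HTAs run with
         local privileges, so script may write to the local disk. -->
    <html>
    <head>
      <hta:application id="survey" applicationname="SurveySketch" />
      <script language="JScript">
        // Append one respondent's answers to a text file next to the .hta.
        function saveResults(resultsText) {
          var fso = new ActiveXObject("Scripting.FileSystemObject");
          var file = fso.OpenTextFile("results.txt", 8, true); // 8 = append
          file.WriteLine(resultsText);
          file.Close();
        }
      </script>
    </head>
    <body>
      <button onclick="saveResults('q1=yes;q2=no')">Save answers</button>
    </body>
    </html>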
Maybe you can use a portable web server with a scripting language on the server side, for example Mongoose (http://code.google.com/p/mongoose/), which can run PHP, CGI, etc. scripts. Then simply create a script that saves a file to the hard drive, and leave the rest of the application as it is.
Use a script to start the web server, and perhaps a portable web browser like K-Meleon (http://kmeleon.sourceforge.net/), which is highly configurable, to open the application. Or point the system's default browser at your localhost URL.
The only problem may be that the user has to adjust the firewall the first time the server runs.

Downloading large files to PC from OAS Server

We have an Oracle 10g Forms application running on a Solaris OAS server, with the forms displaying in IE. Part of the application involves uploading and downloading files (Word docs and PDFs, mainly) between the PC and the OAS server, using Oracle's webutil utility.
The problem is with large files (anything over 25 MB or so): it takes a long time, sometimes many minutes. Uploading seems to work, even with large files. Downloading large files, though, will cause it to error out partway through the download.
I've been testing with a 189 MB file in our development system. Using WEBUTIL_FILE_TRANSFER.Client_To_DB (or Client_To_DB_with_Progress), the download would error out after about 24 MB. I switched to WEBUTIL_FILE_TRANSFER.URL_To_Client_With_Progress and finally got the entire file to download, but it took 22 minutes. Doing it without the progress bar got it down to 18 minutes, but that's still too long.
I can display files in the browser, and my test file displayed in about 5 seconds, but many files need to be downloaded for editing and then re-uploaded.
Any thoughts on how to make this uploading and downloading faster? At this point, I'm open to almost any idea, whether it uses webutil or not. Solutions that are at least somewhat native to Oracle are preferred, but I'm open to suggestions.
Thanks,
AndyDan
This may be totally out to lunch, but since you're looking for any thoughts that might help, here are mine.
First of all, I'm assuming that the actual editing of the files happens outside the browser, and that you're just looking for a better way to get the files back and forth.
In that case, one option I've used in the past is just to route around the web application using Apache, or any other vanilla web server you like. For downloading, create a unique file session token, remember it in the web application, and place a copy of the file, named with the token (e.g. <unique token>.doc), in a download directory visible to Apache. Then provide a link to the file that will be served via Apache.
For upload, you have a couple of options. One is to use the mechanism you've got; when a file is uploaded, you just have to match on the token in the name to patch the file back into your archive. Alternatively, you could create a very simple file upload form separate from your application that uploads the file to a temp directory via Apache, then routes the user back into your application and provides the token in the URL HTTP GET-style, or else in a cookie.
Before you go to all that trouble, you'll want to make sure that your vanilla web server will provide better upload and download speed and reliability than your current solution, but it should.
As an aside, I don't know whether the application server you're using provides HTTP compression, but if it does, you should make sure it's enabled and working. This is probably the best single thing you can do to increase transfer speed of large files, assuming they're fairly compressible. If your application server doesn't support it, then most any vanilla web server will.
I hope that helps.
I ended up using CLIENT_HOST to call an FTP command to download the files. My 189 MB test file took 20-22 minutes to download using WEBUTIL_FILE_TRANSFER.URL_To_Client_With_Progress, and only about 20 seconds using FTP. It's not the best solution because it leaves the FTP password exposed on the PC temporarily, but only for as long as the download takes, and even then the user would have to know where to find it.
So we're implementing this for now, and looking for a more secure but still performant long-term solution.
