Downloading large files to PC from OAS Server

We have an Oracle 10g forms application running on a Solaris OAS server, with the forms displaying in IE. Part of the application involves uploading and downloading files (Word docs and PDFs, mainly) from the PC to the OAS server, using Oracle's webutil utility.
The problem is with large files (anything over 25 MB or so): transfers take a long time, sometimes many minutes. Uploading seems to work, even with large files. Downloading large files, though, errors out partway through the download.
I've been testing with a 189 MB file in our development system. Using WEBUTIL_FILE_TRANSFER.Client_To_DB (or Client_To_DB_with_Progress), the download would error out after about 24 MB. I switched to WEBUTIL_FILE_TRANSFER.URL_To_Client_With_Progress and finally got the entire file to download, but it took 22 minutes. Doing it without the progress bar got it down to 18 minutes, but that's still too long.
I can display files in the browser, and my test file displayed in about 5 seconds, but many files need to be downloaded for editing and then re-uploaded.
Any thoughts on how to accomplish this uploading and downloading faster? At this point, I'm open to almost any idea, whether it uses webutil or not. Solutions that are at least somewhat native to Oracle are preferred, but I'm open to suggestions.
Thanks,
AndyDan

This may be totally out to lunch, but since you're looking for any thoughts that might help, here are mine.
First of all, I'm assuming that the actual editing of the files happens outside the browser, and that you're just looking for a better way to get the files back and forth.
In that case, one option I've used in the past is just to route around the web application using Apache, or any other vanilla web server you like. For downloading, create a unique file session token, remember it in the web application, and place a copy of the file, named with the token (e.g. <unique token>.doc), in a download directory visible to Apache. Then provide a link to the file that will be served via Apache.
For upload, you have a couple of options. One is to use the mechanism you've got, then when a file is uploaded, you just have to match on the token in the name to patch the file back into your archive. Alternately, you could create a very simple file upload form separate from your application that will upload the file to a temp directory via Apache, then route the user back into your application and provide the token in the URL HTTP GET-style or else in a cookie.
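To make the download half of this concrete, here is a minimal sketch (in Python, purely for illustration; the directories, token scheme, and base URL are assumptions, not anything from the original setup):

    import shutil
    import uuid
    from pathlib import Path

    # Hypothetical locations: the archive where the application keeps files,
    # and a directory that Apache (or any vanilla web server) serves directly.
    ARCHIVE_DIR = Path("/app/archive")
    DOWNLOAD_DIR = Path("/var/www/downloads")
    BASE_URL = "http://server.example.com/downloads"

    def stage_for_download(filename: str) -> str:
        """Copy a file into the web-served directory under a unique token
        and return the URL to hand back to the user."""
        token = uuid.uuid4().hex
        staged_name = f"{token}{Path(filename).suffix}"   # e.g. <token>.doc
        shutil.copy2(ARCHIVE_DIR / filename, DOWNLOAD_DIR / staged_name)
        # The application would remember the token -> original file mapping
        # so an uploaded <token>.doc can be patched back into the archive.
        return f"{BASE_URL}/{staged_name}"

    if __name__ == "__main__":
        print(stage_for_download("contract.doc"))

The upload direction is the same idea in reverse: match the token in the uploaded file's name and copy it back into the archive.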
Before you go to all that trouble, you'll want to make sure that your vanilla web server will provide better upload and download speed and reliability than your current solution, but it should.
As an aside, I don't know whether the application server you're using provides HTTP compression, but if it does, you should make sure it's enabled and working. This is probably the best single thing you can do to increase transfer speed of large files, assuming they're fairly compressible. If your application server doesn't support it, then most any vanilla web server will.
I hope that helps.

I ended up using CLIENT_HOST to call an FTP command to download the files. My 189MB test file took 20-22 minutes to download using WEBUTIL_FILE_TRANSFER.URL_To_Client_With_Progress, and only about 20 seconds using FTP. It's not the best solution because it leaves the FTP password exposed on the PC temporarily, but only for as long as the download takes, and even then the user would have to know where to find it.
So, we're implementing this for now, and looking for a more secure but still performant long term solution.
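For reference, the kind of client-side FTP pull that CLIENT_HOST ends up invoking could be sketched in Python roughly as follows; the host, credentials, and paths below are placeholders, not the poster's actual values:

    from ftplib import FTP

    # Placeholder connection details -- in the real setup these would come
    # from the Forms application via CLIENT_HOST or a generated script.
    HOST = "oas-server.example.com"
    USER = "ftpuser"
    PASSWORD = "secret"          # exposing this on the client is the stated drawback
    REMOTE_PATH = "bigfile.pdf"
    LOCAL_PATH = r"C:\temp\bigfile.pdf"

    with FTP(HOST) as ftp:
        ftp.login(USER, PASSWORD)
        with open(LOCAL_PATH, "wb") as out:
            # Binary transfer in 64 KB blocks; large files stream straight to disk.
            ftp.retrbinary(f"RETR {REMOTE_PATH}", out.write, blocksize=64 * 1024)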

Related

Sync a local folder with a server via REST API calls?

I currently have the following problem and can't decide which way to go:
I have a local directory with subfolders and files and want to mirror and sync it with a remote directory on a server. The problem is that I don't have any direct access to the server itself. The only access point I have is a bunch of REST API calls, such as: uploading a file, downloading a file, getting the metadata of a file (including creation and change date), and getting a file/directory listing.
I have already spent some time looking for possible programs/implementations, but none of them have really convinced me. Here are some of the possibilities I have considered so far:
Use a PowerShell or Python script and manually check each file and folder for changes, then schedule a task to call the script every x minutes/hours (a rough sketch of this approach follows below)
Use the Microsoft Sync Framework (MSF) and implement a custom SyncProvider which handles the REST calls and translates them into the MSF format. Here I can't really tell whether it's feasible at all or how complex it would be
Use tools like Syncthing or similar, but I couldn't find one that supports a remote sync directory accessible only via REST calls; as there are quite a lot of tools, though, I might have missed some that do
I'm working under Windows 10, so the solution should run on Windows and preferably not require too many additional resources.
Furthermore, the solution should be somewhat resilient to errors, as the REST API calls seem to have a tendency to fail occasionally (roughly 1 in 10 calls fails).
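For what it's worth, a bare-bones version of the first option (a scheduled Python script) might look like the sketch below. The endpoint paths, the JSON shape of the listing, and the retry count are all assumptions, since the actual API isn't specified:

    import time
    from pathlib import Path
    import requests

    BASE_URL = "https://server.example.com/api"   # hypothetical REST base URL
    LOCAL_ROOT = Path(r"C:\data\mirror")
    MAX_RETRIES = 3                                # copes with the ~1-in-10 failure rate

    def call_with_retry(method, url, **kwargs):
        """Retry wrapper around requests, since individual calls fail occasionally."""
        for attempt in range(1, MAX_RETRIES + 1):
            try:
                resp = requests.request(method, url, timeout=30, **kwargs)
                resp.raise_for_status()
                return resp
            except requests.RequestException:
                if attempt == MAX_RETRIES:
                    raise
                time.sleep(2 * attempt)

    def remote_listing():
        # Assumed response shape: [{"path": "docs/a.txt", "modified": 1700000000}, ...]
        return call_with_retry("GET", f"{BASE_URL}/files").json()

    def sync_up():
        """Upload any local file that is newer than (or missing from) the remote side."""
        remote = {item["path"]: item["modified"] for item in remote_listing()}
        for local_file in LOCAL_ROOT.rglob("*"):
            if not local_file.is_file():
                continue
            rel = local_file.relative_to(LOCAL_ROOT).as_posix()
            if rel not in remote or local_file.stat().st_mtime > remote[rel]:
                with open(local_file, "rb") as fh:
                    call_with_retry("PUT", f"{BASE_URL}/files/{rel}", data=fh)

    if __name__ == "__main__":
        sync_up()    # schedule via Task Scheduler every x minutes/hours

The same comparison run in the other direction would cover downloads; the point is mainly the retry wrapper around every call.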
Any ideas and suggestions are welcome :)

How to share terminal command output in realtime via a web page?

As programming work often requires us to share our terminal output, I am looking for a persistent way to share the output (stdout and stderr) via a web page.
The old-school approach was to copy/paste to a gist or similar service; even piping would work. Still, this does not give you colored console output or real time. A rough do-it-yourself baseline along these lines is sketched after the requirements list below.
Another nice-to-have feature would be the ability to use a cloud storage service to store the uploaded content. Still, something like AWS S3 is not usable because it has no support for streamed upload and download: an object becomes available only once you have finished uploading it, which means the shared command output could not be accessed before the command is finished.
Identified requirements:
persistence: the uploaded content needs to stay up for at least 30 days
ANSI coloring support, because plain text is hard to read
live output streaming: the content should be accessible even if the command hasn't finished yet
open-source client
(optional) open-source server, so you can host your own and not rely on a service that could go offline without notice or change its TOS
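This doesn't meet all of the requirements above (in particular it doesn't render ANSI colors in the browser by itself), but as a baseline for the "piping" idea, a small Python wrapper can tee a command's combined output into a directory you serve over HTTP. The command, log path, and server choice here are arbitrary examples:

    import subprocess
    import sys

    # Capture stdout and stderr of a long-running command and append every chunk
    # to a log file inside a web-served directory (e.g. run `python -m http.server`
    # in /srv/www), so others can curl or refresh the URL while the command runs.
    CMD = ["make", "test"]
    LOG_PATH = "/srv/www/session.log"

    with open(LOG_PATH, "wb") as log:
        proc = subprocess.Popen(CMD, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        for chunk in iter(lambda: proc.stdout.read(1024), b""):
            sys.stdout.buffer.write(chunk)     # still see the output locally
            sys.stdout.buffer.flush()
            log.write(chunk)                   # ...and stream it to the web root
            log.flush()
        proc.wait()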
I did some research before and here are my current findings:
seashells works, but it has some serious problems: only the last 5 executions are kept, and all of them are recycled after 24 hours. The second one is that the server is not open-source, and the entire project has a single student behind it with no plans to open-source it. This makes it high-risk to rely on.
streamhut - 4y old abandonware?
rtail - 4y old abandonware
Notable but not usable:
tmate - shares terminal sessions, not command output
gotty - same as above
Do you know another approach that would work?

Performance of a java application rendering video files

I have a Java/J2EE application deployed in a Tomcat container on a Windows server. The application is a training portal where the training files, such as PDF/PPT/Flash/MP4 files, are read from a share path. When the user clicks a training link, the associated file is downloaded from the share path to the client machine and starts running.
If the user clicks MP4/Flash/PDF files, it takes too much time for them to open.
Is there anything at the application level we need to configure? Or is it a configuration for load on the server? Or is it something that needs attention in the WAN settings?
I am unable to find a solution for this issue.
Please post your thoughts.
I'm not 100% sure because there aren't many details, but I'm 90% sure that the application code is not the main problem.
For example:
If the user clicks MP4/Flash/PDF files, it takes too much time for them to open.
A PDF is basically just a string. Flash is a client-side technology. And I'm pretty sure that you just send a stream to a video player in order to play an MP4. So the server is supposed to just take the file and send it. The server is probably not the source of your problem, if we assume it can handle the number of requests.
So it's about your network: it is too slow for some reason. And it's difficult to be more specific without more details.
Regards.

Downloading a large number of files

I'm researching solutions for a potential client. They're requesting the ability to download a large number of MP3s (1000+) from their online catalog.
I've researched/tested building a zip containing all MP3s using ZipArchive but ran into obvious memory leak issues that have ruled that solution out.
I'm now trying to think out of the box.
One idea was to create an FTP queue or a Torrent type download link for them. Is there anything out there that can pull something like this off?
Any help or suggested direction would be greatly appreciated! Thanks!!
Edit: Here is the overall process/goal that we're trying to achieve.
The client creates music for TV/film placement. They maintain an online catalog AND a local copy they send to potential buyers. The online catalog and the offline catalog need to mirror each other. The problem is that they have multiple offices that will have to update their local copies with the new files added to the online catalog from many different locations.
Example: East Coast User updates catalog with 100 new files. West Coast User needs to update the offline catalog with the new files retrieved from the online catalog.
We had hoped to create custom zips of the files each user needed to update their catalog, based on the user's download history that we'd maintain in MySQL. We were testing ZipArchive, but we couldn't seem to build zips over 175 MB (give or take). We're in the process of testing ZipStreaming but are having some issues.
I hope this clears up the overall goal and problems we are facing.
GNU wget?
It can download recursively. Just give wget a list of all the files on the server, e.g.
http://www.example.org/filelist.html, which contains links like file1.mp3, file2.mp3, etc. (Apache normally generates such an index file automatically when a directory without an index.html/php in it is requested.)
http://linux.die.net/man/1/wget
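For example, given such an index page, a typical invocation (flags per the wget manual linked above; not a tested recipe for this particular catalog) would be:

    # follow the links on the index page one level deep, keep only MP3s,
    # stay below the starting directory, and resume partially downloaded files
    wget -r -l1 -np -A mp3 -c http://www.example.org/filelist.html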
Frankly speaking, I can't identify the actual problem/question in your post. If you are looking to minimize network load, then you need to remember that MP3 files don't compress well because they are already compressed (not as well as possible, but well). If you are looking for a transport, then any file transfer protocol will do (FTP, SFTP, HTTP, WebDAV).
If you need flexibility and features, I'd recommend SFTP: this is a protocol for remote file system access, so besides the "get file" operation it has plenty of useful operations, including machine-readable directory listings (not always available in FTP and not available in standard HTTP), built-in ZLib compression, built-in support for resuming file transfers, and more bonuses. HTTP also has ZLib compression, but it is not always available.
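To make the SFTP suggestion concrete, a minimal Python sketch using the third-party paramiko library might look like this; the host, credentials, and directories are placeholders, and this is not tied to any particular vendor component mentioned here:

    import os
    import paramiko

    HOST, USER, PASSWORD = "catalog.example.com", "user", "secret"   # placeholders
    REMOTE_DIR, LOCAL_DIR = "/catalog/mp3", r"C:\catalog\mp3"

    transport = paramiko.Transport((HOST, 22))
    transport.use_compression(True)            # the ZLib compression mentioned above
    transport.connect(username=USER, password=PASSWORD)
    sftp = paramiko.SFTPClient.from_transport(transport)
    try:
        # Machine-readable directory listing: fetch only files we don't have yet,
        # or whose size differs from the remote copy.
        for entry in sftp.listdir_attr(REMOTE_DIR):
            local_path = os.path.join(LOCAL_DIR, entry.filename)
            if not os.path.exists(local_path) or os.path.getsize(local_path) != entry.st_size:
                sftp.get(f"{REMOTE_DIR}/{entry.filename}", local_path)
    finally:
        sftp.close()
        transport.close()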
Update: your approach doesn't check what is really available on the client; you are going to prepare ZIP files based on your (possibly incorrect) knowledge of what the client already has.
If the client and server are both applications that you develop, then you should use the rsync protocol or something similar to update the data online (not using any ZIP files) and download only the files that are missing on the client. If direct communication between the client and the server is not possible, you can have the client send its state to the server, and the server can prepare an individual package after that. As for ZIP functionality, it's needed only when you use batch updates (no real-time communication between the client and the server). I don't know what technology you are using, but if your only problem is with the ZIP component, you can use something else for data packing: either a different ZIP component (for .NET and VCL we have a ZIP component) or some other packing solution (for example, our SolFS product doesn't have size limits). Unfortunately, I am not aware of an rsync-like implementation available as a component.
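As a rough illustration of the "client sends its state" idea (not tied to any particular product), the client could send a manifest of what it already has and the server could diff that against the catalog. A Python sketch, where the manifest format and catalog location are assumptions:

    import hashlib
    import json
    from pathlib import Path

    LOCAL_CATALOG = Path(r"C:\catalog\mp3")    # placeholder location

    def build_manifest(root: Path) -> dict:
        """Map each relative path to its size and MD5 so the server can work out
        exactly which files this client is missing or has outdated."""
        manifest = {}
        for f in root.rglob("*.mp3"):
            manifest[f.relative_to(root).as_posix()] = {
                "size": f.stat().st_size,
                "md5": hashlib.md5(f.read_bytes()).hexdigest(),
            }
        return manifest

    def server_side_delta(client_manifest: dict, server_manifest: dict) -> list:
        """On the server: everything the client lacks or has a different hash for."""
        return [
            path for path, meta in server_manifest.items()
            if client_manifest.get(path, {}).get("md5") != meta["md5"]
        ]

    if __name__ == "__main__":
        print(json.dumps(build_manifest(LOCAL_CATALOG), indent=2))

The server would then package (or stream) only the files returned by the delta, instead of building one huge ZIP per user up front.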

Upload large files using Ruby

I'm wondering what is the best pattern to allow large files to be uploaded to a server using Ruby.
I've found "Rails and Large, Large file Uploads: Looking at the alternative", but it doesn't give any concrete solutions.
I don't want to use Rails since I'm working on a simple upload server that'll run in standalone mode. I'm guessing that Sinatra could be the key but I don't know which web server I should use to run it without raising a Timeout.
I also need this web server to allow simultaneous upload.
UPDATE: By "large files" I mean between 200MB and 5GB.
UPDATE2: Since those files are videos (in my case), I can deal with a max size of 2GB like youtube.
OK, I am taking a bit of a stretch here, but:
If you used CouchDB as a target for your uploads, you would get rid of the timeout problem.
Consider CouchDB as some "temp" memory in this example.
So once an upload finishes, you can take the file from CouchDB and do with it whatever you want.
I managed to upload files as big as 9 GB over a DSL line into CouchDB without any drama.
It may take a bit of reading, but I think you could make it work.
CouchDB has many Rails gems, so it plays nice with others ;)
Let me know if you want to go down that rabbit hole so I can give you some more pointers.
Passenger recommends using a separate Apache/nginx module to handle uploads.
