I have a Java/J2EE application deployed in a Tomcat container on a Windows server. The application is a training portal where training files such as PDF/PPT/Flash/MP4 files are read from a share path. When the user clicks a training link, the associated file is downloaded from the share folder to the client machine and starts playing.
If the user clicks MP4/Flash/PDF files, they take too long to open.
Is there anything we need to configure at the application level? Or is it a load-related configuration on the server? Or is it something that needs attention in the WAN settings?
I am unable to find a solution for this issue.
Please post your thoughts.
I'm not 100% sure because there aren't many details, but I'm 90% sure that the application code is not the main problem.
For example:
If the user clicks mp4/flash/pdf files, it is taking too much time to get opened.
A PDF is basically just a static file. Flash is a client-side technology. And I'm pretty sure that you just send a stream to a video player in order to play an MP4. So the server is supposed to just take the file and send it. The server is probably not the source of your problem, assuming it can handle the number of requests.
So it's about your network: it is too slow for some reason. And it's difficult to be more specific without more details.
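As a point of reference, here is a minimal sketch of what "just take the file and send it" can look like in a plain Java servlet. The servlet name and the UNC share path are assumptions, not your actual code. If your application already streams files this way, the delay is more likely coming from the share or the network than from the code.

```java
import java.io.*;
import javax.servlet.http.*;

// Hypothetical servlet that streams a training file from the share to the client.
public class TrainingFileServlet extends HttpServlet {
    private static final String SHARE_ROOT = "\\\\fileserver\\training"; // assumed UNC path

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String name = new File(req.getParameter("file")).getName(); // strip any path components
        File file = new File(SHARE_ROOT, name);
        if (!file.isFile()) {
            resp.sendError(HttpServletResponse.SC_NOT_FOUND);
            return;
        }
        resp.setContentType(getServletContext().getMimeType(name));
        resp.setHeader("Content-Length", String.valueOf(file.length())); // lets the client show progress

        // Copy in fixed-size chunks so the whole file is never held in memory.
        try (InputStream in = new BufferedInputStream(new FileInputStream(file));
             OutputStream out = resp.getOutputStream()) {
            byte[] buf = new byte[64 * 1024];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        }
    }
}
```

If the code already looks like this and it is still slow, time a plain file copy from the share to the Tomcat machine; that usually tells you whether the share or the WAN link is the real bottleneck.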
Regards.
Can somebody please provide me a link to an FTP client application with complete functionality, like FileZilla or something similar?
I am looking for an open-source solution, and it should be developed in .NET (C#, VB.NET).
I went through many FTP libraries, like NetFtp and many others, but I don't have enough time to develop one from scratch. I need something pre-developed that I can then modify according to my requirements.
I want to implement restrictions on file uploads and on the number of files uploaded (based on the logged-in user).
Thank you.
There are many FTP servers that have built-in capabilities to restrict certain file types, set quotas, throttle bandwidth, and limit the number of files uploaded.
You may want to consider getting a server that already supports this functionality; then you can use any standard FTP client that you want.
I have a large file on Windows Azure and I want to download and save it to my disk. The maximum lifetime of each link on Windows Azure is 60 minutes. If I download directly from the link, that may not be enough time. How can I download it?
Nathan, your question isn't very clear, but I suspect you are referring to the time allowed by a Shared Access Signature, and are concerned that the client might not download the file within the time allowed?
There are 2 scenarios here:
1. Once a storage transaction (i.e. a file download) which uses a SAS begins, the transfer will be able to continue past the expiration of the SAS. It is only new requests authenticated using the SAS that will fail if they are attempted past the expiration time on the SAS.

2. If the client has to resume the download (or is downloading in blocks), then the client has to be smart enough to detect the failed authentication after the SAS expires and then re-request a new SAS from the issuer (see the sketch below).
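As a rough illustration of the second scenario, here is a hedged Java sketch of a download loop that resumes after an authentication failure by asking for a fresh SAS. The requestNewSasUrl() helper is hypothetical, standing in for however your issuer hands out new signed URLs, and the assumption that an expired SAS is rejected with HTTP 403 should be verified against your storage version.

```java
import java.io.*;
import java.net.HttpURLConnection;
import java.net.URL;

public class SasResumingDownload {

    // Hypothetical helper: however your application obtains a fresh SAS URL from the issuer.
    static String requestNewSasUrl() {
        throw new UnsupportedOperationException("ask the SAS issuer for a new signed URL");
    }

    public static void download(String sasUrl, File target) throws IOException {
        long written = target.exists() ? target.length() : 0;
        while (true) {
            HttpURLConnection conn = (HttpURLConnection) new URL(sasUrl).openConnection();
            if (written > 0) {
                conn.setRequestProperty("Range", "bytes=" + written + "-"); // resume where we left off
            }
            int status = conn.getResponseCode();
            if (status == 403) {                    // assumed: expired SAS is rejected with 403
                sasUrl = requestNewSasUrl();        // get a new signature and retry the request
                continue;
            }
            if (status != 200 && status != 206) {
                throw new IOException("Unexpected HTTP status " + status);
            }
            try (InputStream in = conn.getInputStream();
                 OutputStream out = new FileOutputStream(target, written > 0)) { // append on resume
                byte[] buf = new byte[64 * 1024];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                    written += n;
                }
                return; // download complete
            } catch (IOException e) {
                // connection dropped mid-transfer; loop around and resume from 'written'
            }
        }
    }
}
```

For a single uninterrupted GET, none of this is needed, per the first scenario above.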
Try using a download accelerator like FlashGot or something similar.
One option would be to download the file in pieces and reassemble it once you have the pieces. There are a couple of ways to do that.
If the blob was uploaded in multiple blocks, then you could download each block individually. This is supported directly in the client libraries, so if you can do this it's probably easier. You can also download the blocks in parallel to reduce the total time it takes to download.
You could use HTTP Range headers to get certain byte ranges. I don't believe this is supported in the clients, so you'd probably have to code it yourself. But it will work even if the blob was not uploaded in blocks. I think this could also be done in parallel, but I'm not sure.
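Here is a hedged sketch of that byte-range idea in Java, using nothing but plain HTTP: split the blob into fixed-size pieces, fetch each piece with a Range header (which Azure blob storage honours on GET requests), and concatenate the parts in order. The piece count, part-file names, and the assumption that you already know the blob's total size (for example from a HEAD request's Content-Length) are illustrative choices, not part of any official client library.

```java
import java.io.*;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class RangeDownloader {

    // Downloads one byte range [start, end] of the blob into its own part file.
    static File downloadRange(String blobUrl, long start, long end, int index) throws IOException {
        File part = new File("part-" + index + ".tmp");
        HttpURLConnection conn = (HttpURLConnection) new URL(blobUrl).openConnection();
        conn.setRequestProperty("Range", "bytes=" + start + "-" + end);
        try (InputStream in = conn.getInputStream();
             OutputStream out = new FileOutputStream(part)) {
            byte[] buf = new byte[64 * 1024];
            int n;
            while ((n = in.read(buf)) != -1) out.write(buf, 0, n);
        }
        return part;
    }

    // Example call: download("https://<account>.blob.core.windows.net/c/big.bin?<sas>", size, new File("big.bin"), 8);
    public static void download(String blobUrl, long totalSize, File target, int pieces) throws Exception {
        long pieceSize = (totalSize + pieces - 1) / pieces;
        ExecutorService pool = Executors.newFixedThreadPool(Math.min(pieces, 4));
        List<Future<File>> parts = new ArrayList<>();
        for (int i = 0; i < pieces; i++) {
            final long start = i * pieceSize;
            if (start >= totalSize) break;               // nothing left for this piece
            final long end = Math.min(start + pieceSize, totalSize) - 1;
            final int index = i;
            parts.add(pool.submit(() -> downloadRange(blobUrl, start, end, index)));
        }
        // Reassemble the pieces in order once all downloads have finished.
        try (OutputStream out = new FileOutputStream(target)) {
            for (Future<File> f : parts) {
                File part = f.get();
                try (InputStream in = new FileInputStream(part)) {
                    byte[] buf = new byte[64 * 1024];
                    int n;
                    while ((n = in.read(buf)) != -1) out.write(buf, 0, n);
                }
                part.delete();
            }
        } finally {
            pool.shutdown();
        }
    }
}
```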
I have a transactional database with a very large number of records and concurrent access. I need to provide a download facility to the clients. The download file size can be up to 300 MB, so if I serve the download directly from server memory there will be a performance issue. Is there any alternative way to achieve this?
We have an Oracle 10g forms application running on a Solaris OAS server, with the forms displaying in IE. Part of the application involves uploading and downloading files (Word docs and PDFs, mainly) from the PC to the OAS server, using Oracle's webutil utility.
The problem is that with large files (anything over 25 MB or so) it takes a long time, sometimes many minutes. Uploading seems to work, even with large files. Downloading large files, though, will cause it to error out partway through the download.
I've been testing with a 189 MB file in our development system. Using WEBUTIL_FILE_TRANSFER.Client_To_DB (or Client_To_DB_with_Progress), the download would error out after about 24 MB. I switched to WEBUTIL_FILE_TRANSFER.URL_To_Client_With_Progress, and finally got the entire file to download, but it took 22 minutes. Doing it without the progress bar got it down to 18 minutes, but that's still too long.
I can display files in the browser, and my test file displayed in about 5 seconds, but many files need to be downloaded for editing and then re-uploaded.
Any thoughts on how to accomplish this uploading and downloading faster? At this point, I'm open to almost any idea, whether it uses webutil or not. Solutions that are at least somewhat native to Oracle are preferred, but I'm open to suggestions.
Thanks,
AndyDan
This may be totally out to lunch, but since you're looking for any thoughts that might help, here are mine.
First of all, I'm assuming that the actual editing of the files happens outside the browser, and that you're just looking for a better way to get the files back and forth.
In that case, one option I've used in the past is just to route around the web application using Apache, or any other vanilla web server you like. For downloading, create a unique file session token, remember it in the web application, and place a copy of the file, named with the token (e.g. <unique token>.doc), in a download directory visible to Apache. Then provide a link to the file that will be served via Apache.
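A minimal sketch of that download hand-off, written here in Java purely for illustration (the directory, URL, and method names are made up; the same idea applies in whatever language sits behind your forms): generate a unique token, copy the file into the Apache-served directory under that token, and return the resulting link.

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.UUID;

public class DownloadHandoff {

    // Directory served directly by Apache, and the URL it is mapped to (assumptions).
    private static final Path DOWNLOAD_DIR = Paths.get("/var/www/downloads");
    private static final String DOWNLOAD_URL = "https://yourserver.example.com/downloads/";

    /**
     * Copies the archived file into the Apache-visible directory under a unique
     * token and returns the link to hand to the user. The token should also be
     * remembered server-side so the re-uploaded file can be matched later.
     */
    public static String publishForDownload(Path archivedFile) throws IOException {
        String token = UUID.randomUUID().toString();
        String fileName = archivedFile.getFileName().toString();
        String ext = fileName.contains(".") ? fileName.substring(fileName.lastIndexOf('.')) : "";
        Path copy = DOWNLOAD_DIR.resolve(token + ext);   // e.g. <unique token>.doc
        Files.copy(archivedFile, copy, StandardCopyOption.REPLACE_EXISTING);
        return DOWNLOAD_URL + copy.getFileName();        // link served by Apache, not the app server
    }
}
```

You would also want something that periodically sweeps expired token files out of the download directory.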
For upload, you have a couple of options. One is to use the mechanism you've got, then when a file is uploaded, you just have to match on the token in the name to patch the file back into your archive. Alternately, you could create a very simple file upload form separate from your application that will upload the file to a temp directory via Apache, then route the user back into your application and provide the token in the URL HTTP GET-style or else in a cookie.
Before you go to all that trouble, you'll want to make sure that your vanilla web server will provide better upload and download speed and reliability than your current solution, but it should.
As an aside, I don't know whether the application server you're using provides HTTP compression, but if it does, you should make sure it's enabled and working. This is probably the best single thing you can do to increase transfer speed of large files, assuming they're fairly compressible. If your application server doesn't support it, then most any vanilla web server will.
I hope that helps.
I ended up using CLIENT_HOST to call an FTP command to download the files. My 189MB test file took 20-22 minutes to download using WEBUTIL_FILE_TRANSFER.URL_To_Client_With_Progress, and only about 20 seconds using FTP. It's not the best solution because it leaves the FTP password exposed on the PC temporarily, but only for as long as the download takes, and even then the user would have to know where to find it.
So, we're implementing this for now, and looking for a more secure but still performant long term solution.