I have a Java/J2EE application deployed in a Tomcat container on a Windows server. The application is a training portal where training files such as PDF/PPT/Flash/MP4 files are read from a share path. When the user clicks a training link, the associated file is downloaded from the share folder to the client machine and starts playing.
If the user clicks MP4/Flash/PDF files, they take too much time to open.
Is there anything we need to configure at the application level? Or is it a load configuration issue on the server? Or is it something that needs attention in the WAN settings?
I am unable to find a solution for this issue.
Please post your thoughts.
I'm not 100% sure because there aren't many details, but I'm 90% sure that the application code is not the main problem.
For example:
If the user clicks MP4/Flash/PDF files, they take too much time to open.
A PDF is basically just a stream of bytes. Flash is a client-side technology. And to play an MP4 you essentially just send a stream to a video player. So the server is only expected to take the file and send it. The server is probably not the source of your problem, assuming it can handle the number of requests.
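To illustrate "take the file and send it": a file-serving servlet is typically nothing more than a buffered copy loop, so there is little room in the application code for minutes of delay. A minimal sketch (the share path, parameter name, and the missing input validation are illustrative assumptions, not your actual code):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Minimal file-streaming servlet: look up the file on the share and copy
// it to the response in chunks. A real servlet must also validate the
// requested name to prevent path traversal; that is omitted here.
public class TrainingFileServlet extends HttpServlet {
    private static final String SHARE_PATH = "\\\\fileserver\\trainings"; // hypothetical share

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        File file = new File(SHARE_PATH, req.getParameter("name"));
        resp.setContentType(getServletContext().getMimeType(file.getName()));
        resp.setContentLength((int) file.length());
        try (InputStream in = new FileInputStream(file);
             OutputStream out = resp.getOutputStream()) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
    }
}
```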
So it's about your network: it is too slow for some reason. And it's difficult to be more specific without more details.
Regards.
Related
I am using JMeter to test the load/performance of live streaming servers, including the Wowza live streaming engine. But I am unable to test live streaming on Wowza Cloud, since I am getting a lot of timeout errors. I am well aware that the timeouts are not caused by delays in the response, because the live stream runs smoothly when opened from an external network. I found out that, after some period of load requests being sent to Wowza Cloud, the domain name itself changes (it's dynamic). I have created the config in JMeter in such a way that the URL path, playlist.m3u8, chunklist.m3u8, and the corresponding stream (.ts) files are all dynamic. But since the domain name itself changes after a period of load testing, the requests being sent partially fail (maybe because the domain name to which I am sending requests is no longer responsible for handling them). Can anybody suggest what to do? And is there any way to load test Wowza Cloud?
You can use this JMeter plugin.
It is a plugin that does URL extraction automatically from the manifest, without you needing to use JMeter extractors; as a consequence, even when segment URLs change due to Wowza Cloud scaling, this is taken into account.
Besides, it accurately simulates how players request the server and gives metrics on user experience.
Still, as written in the other answer, ensure you:
ask for authorization to avoid your test being marked as a DDoS attack
disable Java DNS caching for JMeter's JVM (a sketch follows below)
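On the DNS point: the JVM caches resolved hostnames, so once Wowza Cloud rotates the domain to a new address, JMeter may keep hitting a stale IP. A minimal sketch of turning the cache off (these are standard JVM security properties; they must be set before the JVM performs its first lookup, e.g. via JMeter's startup script or -D flags such as the legacy -Dsun.net.inetaddr.ttl=0):

```java
import java.security.Security;

// Disable JVM-level DNS caching so each lookup re-resolves the dynamic
// Wowza Cloud domain instead of reusing a stale cached address.
public class DisableDnsCache {
    public static void main(String[] args) {
        // TTL of 0 = do not cache lookups; this must run before the first
        // name resolution happens in this JVM.
        Security.setProperty("networkaddress.cache.ttl", "0");
        Security.setProperty("networkaddress.cache.negative.ttl", "0");
    }
}
```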
Disclaimer: I work for the company that develops it.
As you are testing a multi-tenant cloud environment, the first thing you must do is get permission from Wowza. Almost all cloud applications have restrictions on the use of automation outside of their published interfaces. Your point of contact inside Wowza will work with you on your testing window and scale, and will approve your performance test plan, pacing, and think times to ensure they are reasonable and will not impact their service to other tenants on the system.
They can also provide technical insight on how to construct your tests given some unique features/capabilities/engineering for the site. They may even be able to provide you with sample code.
As a general rule of thumb, you don't point and fire tactical nuclear software at sites you don't own, manage, or control, or without direct written permission from those who do have those rights.
I am looking for a simple way to send messages between a WinForms application and a Windows service. The service will run under LocalSystem, so it will be able to install updates to my WinForms app. The app runs in very locked-down environments where ports will be blocked, and the file system is not reliable enough to use for logging. I have tried using named pipes but I could not get them to work. I want to keep it simple, so I was thinking of trying memory-mapped files?
I only want to pass simple strings back and forth between the app and service, e.g.
APP-> Service [Please download this file http... and place it here C:\Program Files...]
Service->APP [0% downloaded]
Service->APP [1% downloaded]
etc..
Service->APP [Update Complete/Failed]
I can't seem to find a good example of how this can be achieved. Are memory-mapped files the best way to go? If so, where do I start?! I have been reading through this post but I cannot seem to make sense of it; it's been a long day! I want everything to be in memory, unlike in that example. Can anyone help?
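For what it's worth, the core read/write mechanism is small. The sketch below is in Java rather than .NET (the .NET counterpart lives in System.IO.MemoryMappedFiles, where MemoryMappedFile.CreateOrOpen can create a non-persisted map, which matches the everything-in-memory requirement); every name and size in it is hypothetical:

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;

// Both processes map the same fixed-size region; the first 4 bytes carry
// the message length, the rest carries the UTF-8 payload.
public class SharedRegionDemo {
    private static final int REGION_SIZE = 4096; // hypothetical size

    public static void main(String[] args) throws Exception {
        try (RandomAccessFile backing = new RandomAccessFile("ipc.map", "rw");
             FileChannel channel = backing.getChannel()) {
            MappedByteBuffer region =
                    channel.map(FileChannel.MapMode.READ_WRITE, 0, REGION_SIZE);

            // Writer side (the service): publish a progress message.
            byte[] msg = "Service->APP [1% downloaded]".getBytes(StandardCharsets.UTF_8);
            region.putInt(0, msg.length);
            region.position(4);
            region.put(msg);

            // Reader side (the app): read the length prefix, then the payload.
            int len = region.getInt(0);
            byte[] payload = new byte[len];
            region.position(4);
            region.get(payload);
            System.out.println(new String(payload, StandardCharsets.UTF_8));
        }
    }
}
```

A real exchange also needs a signal, such as a named event or simple polling, so the reader knows when a new message has arrived; the mapped region itself provides no notification.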
I have read many questions/comments regarding saving images in the DB or the file system on the server side; however, I'm still confused. For now I allow users to upload images (limited to 10MB), save them in a server folder, and serve them via an Apache context path configuration pointing to that location. However, due to the number of images and the high load, we want to provide load balancing and failover functionality. So I have 2 options.
Add code to replicate the uploaded image to all servers, or use rsync to do that.
Use CouchDB or MongoDB and save the image as an attachment of a document, so I get replication functionality out of the box (a GridFS sketch follows below).
Can anyone show me the pros/cons of these approaches? Can CouchDB/MongoDB have the same read performance as the file system?
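For context, option 2 with MongoDB usually means GridFS, which chunks the file into documents so it replicates like the rest of the data set. A rough sketch with the Java driver (the mongodb-driver-sync API is assumed; the connection string and names are placeholders):

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.gridfs.GridFSBucket;
import com.mongodb.client.gridfs.GridFSBuckets;
import org.bson.types.ObjectId;

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;

public class ImageStore {
    public static void main(String[] args) throws Exception {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            GridFSBucket images = GridFSBuckets.create(client.getDatabase("portal"), "images");

            // Store: the file is chunked into documents and replicated
            // along with everything else in the database.
            ObjectId id;
            try (InputStream in = new FileInputStream("upload.jpg")) {
                id = images.uploadFromStream("upload.jpg", in);
            }

            // Serve: stream the chunks back out (e.g. into a servlet response).
            try (OutputStream out = new FileOutputStream("served.jpg")) {
                images.downloadToStream(id, out);
            }
        }
    }
}
```

Replication then comes for free with the database, at the cost of an extra hop compared to serving straight from disk.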
You can also store files in a distributed file system. The benefit over a DB-backed image server is that you do not have to alter the application. Obviously, storing all the data the same way, including images, may be a benefit for you, but changing the architecture of an already working system may also be problematic.
For example, GlusterFS may be installed on top of a "normal" file system to give you distributed features while minimizing changes to the system itself. Via its plugins (translators), it is supposed to support all the features you would expect from a cloud system: replication, load balancing, striping of files into distributed parts, and failover.
Can CouchDB/MongoDB have the same read performance as the file system?
No, there will be a gap between file system and database read speeds; this is an unfortunate reality.
I have no idea about your current setup, load, and performance, so I cannot really advise on what to do; however, Apache isn't really a good image server anyway.
Your best bet might be to look into a CDN cache for your images.
I have a big rich-internet-application (qooxdoo, JS, HTML). The users point their browser at the web server and run it. The problem is that it takes a long time for the users to load the application every time they visit the site.
Is there a way to somehow "bundle" and save the application locally and have the user refer to it locally? So the URL would be something like [c:/]/home/myfiles/application/index.html instead of http://site/path-to-app?
I was thinking of something like Java's JAR files to bundle the application and make it runnable locally in the browser, while the application still reaches the external website to get data.
Any ideas?!
Thanks in advance.
The browser should cache all the files, so the second load of the app should be quite fast. If that's not the case, maybe you are not using the qooxdoo build version of your application, or you disabled the optimizations of the build process.
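If the build output happens to be served by a servlet container, you can also make that caching explicit. A minimal sketch of a filter that marks static files as cacheable for a day (the class name and mapping are assumptions; mod_expires does the same job on Apache):

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Marks static build output as cacheable so repeat visits load from the
// browser cache instead of re-downloading the whole application.
public class CacheHeaderFilter implements Filter {
    private static final long ONE_DAY_SECONDS = 24 * 60 * 60;

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse response = (HttpServletResponse) res;
        response.setHeader("Cache-Control", "public, max-age=" + ONE_DAY_SECONDS);
        chain.doFilter(req, res);
    }

    @Override public void init(FilterConfig filterConfig) { }
    @Override public void destroy() { }
}
```

Map it in web.xml to the build output path (e.g. /build/*, an assumption about your layout) so repeat visits hit the browser cache.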
But there are two ways to get a desktop-like application:
You can offer the files you upload to the server as a ZIP and let the user unzip it. If you don't need a web server to run the files, that should work.
If you want to build a real desktop application, you should have a look at Titanium [1], which can bring a web app to the desktop.
[1] http://www.appcelerator.com/products/titanium-desktop/
Running the qooxdoo application from the file system, like Martin said, should not be a problem. But you have to ensure that the "crossDomain" property, for example on "qx.io.remote.Request" [1], is set to "true"; otherwise the browser's same-origin policy (SOP) blocks the requests to the server.
[1] http://demo.qooxdoo.org/current/apiviewer/#qx.io.remote.Request~crossDomain
We have an Oracle 10g forms application running on a Solaris OAS server, with the forms displaying in IE. Part of the application involves uploading and downloading files (Word docs and PDFs, mainly) from the PC to the OAS server, using Oracle's webutil utility.
The problem is with large files (anything over 25 MB or so): transfers take a long time, sometimes many minutes. Uploading seems to work, even with large files. Downloading large files, though, causes it to error out partway through the download.
I've been testing with a 189 MB file in our development system. Using WEBUTIL_FILE_TRANSFER.Client_To_DB (or Client_To_DB_with_Progress), the download would error out after about 24 MB. I switched to WEBUTIL_FILE_TRANSFER.URL_To_Client_With_Progress, and finally got the entire file to download, but it took 22 minutes. Doing it without the progress bar got it down to 18 minutes, but that's still too long.
I can display files in the browser, and my test file displayed in about 5 seconds, but many files need to be downloaded for editing and then re-uploaded.
Any thoughts on how to make this uploading and downloading faster? At this point, I'm open to almost any idea, whether it uses webutil or not. Solutions that are at least somewhat native to Oracle are preferred, but I'm open to suggestions.
Thanks,
AndyDan
This may be totally out to lunch, but since you're looking for any thoughts that might help, here are mine.
First of all, I'm assuming that the actual editing of the files happens outside the browser, and that you're just looking for a better way to get the files back and forth.
In that case, one option I've used in the past is just to route around the web application using Apache, or any other vanilla web server you like. For downloading, create a unique file session token, remember it in the web application, and place a copy of the file, named with the token (e.g. <unique token>.doc), in a download directory visible to Apache. Then provide a link to the file that will be served via Apache.
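The download side of that can be as small as this sketch (the directory, URL, and class name are placeholders, not a prescribed layout):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.UUID;

public class DownloadHandoff {
    // Directory served directly by Apache, e.g. mapped to /downloads/ (assumption).
    private static final Path APACHE_DIR = Paths.get("/var/www/downloads");

    /** Copies the archive file into the Apache directory under a one-time
     *  token name and returns the URL to hand to the user. */
    public static String publish(Path archiveFile) throws IOException {
        String token = UUID.randomUUID().toString();
        String name = token + ".doc";
        Files.copy(archiveFile, APACHE_DIR.resolve(name), StandardCopyOption.REPLACE_EXISTING);
        // Remember the token -> original file mapping in the web application
        // so the re-upload can be matched back (not shown).
        return "http://yourserver/downloads/" + name; // hypothetical base URL
    }
}
```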
For upload, you have a couple of options. One is to use the mechanism you've got; then, when a file is uploaded, you just have to match on the token in the name to patch the file back into your archive. Alternatively, you could create a very simple file upload form, separate from your application, that uploads the file to a temp directory via Apache, then routes the user back into your application and provides the token in the URL (HTTP GET-style) or else in a cookie.
Before you go to all that trouble, you'll want to make sure that your vanilla web server will provide better upload and download speed and reliability than your current solution, but it should.
As an aside, I don't know whether the application server you're using provides HTTP compression, but if it does, you should make sure it's enabled and working. This is probably the best single thing you can do to increase transfer speed of large files, assuming they're fairly compressible. If your application server doesn't support it, then most any vanilla web server will.
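If you want to check whether the compressibility assumption holds for your documents before touching server configuration, a quick measurement is easy (a throwaway sketch; pass the file path as the argument):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.zip.GZIPOutputStream;

// Compresses one file in memory and reports the size ratio, i.e. roughly
// what HTTP gzip compression would save on the wire for that file.
public class CompressionCheck {
    public static void main(String[] args) throws IOException {
        byte[] original = Files.readAllBytes(Paths.get(args[0]));
        ByteArrayOutputStream compressed = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(compressed)) {
            gzip.write(original);
        }
        System.out.printf("%d -> %d bytes (%.0f%% of original)%n",
                original.length, compressed.size(),
                100.0 * compressed.size() / original.length);
    }
}
```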
I hope that helps.
I ended up using CLIENT_HOST to call an FTP command to download the files. My 189MB test file took 20-22 minutes to download using WEBUTIL_FILE_TRANSFER.URL_To_Client_With_Progress, and only about 20 seconds using FTP. It's not the best solution because it leaves the FTP password exposed on the PC temporarily, but only for as long as the download takes, and even then the user would have to know where to find it.
So, we're implementing this for now, and looking for a more secure but still performant long-term solution.