I'm developing an app (Windows) that will allow essentially random access to about 1000 1920x1080 images. In effect this is a movie, but using stills and not presented sequentially -- the user can "scrub" to any image very rapidly.
I gather there are three factors to trade off: load time, decode time (if needed) and presentation time. I can specify the hardware, within limits, so SSD and good graphics card can be assumed.
Compressed images (PNG, JPG etc) will load more quickly but have an added decode step. Raw or BMP images will be slower to load but avoid the decode step. Presentation time should be the same in all cases, right, once in the proper form?
Is there an obviously superior approach, codec, library, hardware etc? Can anyone point to a study of the tradeoffs, or offer personal experience as a guide?
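Failing a published study, I suppose I can measure it on the target hardware myself. A throwaway harness along these lines (Java only because it was quick to write -- the load/decode split applies to any stack, and the frame0001.* names are placeholder files, one per candidate format) times the load and decode steps separately:

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import javax.imageio.ImageIO;

public class LoadDecodeTiming {
    public static void main(String[] args) throws IOException {
        // Placeholder sample frames: one representative frame saved in each candidate format.
        Path[] samples = {
            Paths.get("frame0001.png"),
            Paths.get("frame0001.jpg"),
            Paths.get("frame0001.bmp"),
        };

        for (Path p : samples) {
            long t0 = System.nanoTime();
            byte[] raw = Files.readAllBytes(p);                               // "load" cost: disk -> memory
            long t1 = System.nanoTime();
            BufferedImage img = ImageIO.read(new ByteArrayInputStream(raw));  // "decode" cost: bytes -> pixels
            long t2 = System.nanoTime();

            System.out.printf("%s  %dx%d  size=%d bytes  load=%.1f ms  decode=%.1f ms%n",
                    p, img.getWidth(), img.getHeight(), raw.length,
                    (t1 - t0) / 1e6, (t2 - t1) / 1e6);
        }
    }
}
```

Running that over a handful of representative frames in each format should show which side of the trade-off dominates on an SSD.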
I'm creating a program that reads pictures (JPG, max size about 10 MB per file) from a FlashAir card as soon as they're taken, displays them on a bigger screen for review, and saves them to a local folder. It is paramount to reduce the time from the moment the picture is taken until it is displayed to the user, and to prevent loss of quality (they are macro pictures). Now, the camera works with JPG, so changing that is not an option for the moment. All the pictures must be saved locally at the maximum possible quality.
I was wondering what would be the best way to achieve this. Since the FlashAir card is in the camera and moves around, the bottleneck will probably be in the wireless transfer (max speed is 54 Mb/s).
The picture could be displayed within the Java app or sent to a different app for editing, but I want to reduce I/O operations (I don't want to have to re-read the picture once it is saved locally just to display it).
What is the best way to achieve this using pure Java 8 classes?
My test implementation uses the ImageIO.read() and ImageIO.write() methods. The problems I have with this approach are that it takes a long time for the picture to be displayed (it is actually read back from the save folder) and that the image is re-encoded and compressed, losing quality compared to the original file on the SD card.
I feel there should be a way to transfer the bytes very efficiently over the network first and then run two parallel processes: one saving the untouched bytes to disk, and one decoding and displaying the image (the image could then potentially be edited and saved to a different location).
I don't need a fully working example. My main concern is what Java 8 I/O classes are best suited for this job and to know if my approach is the best one to achieve the results.
Edit
After some research I was thinking of using a ReadableByteChannel to store the picture bytes in a ByteBuffer and then pass copies of it to two jobs that run in parallel: the one saving the bytes would use a FileChannel, and the one displaying the image would use them to create an ImageIcon.
I don't know if there is a better/recommended approach.
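To make the idea concrete, here is a minimal sketch of what I have in mind (the FlashAir URL and the target path are made up, error handling is reduced to the bare minimum, and the target folder is assumed to exist): the JPEG bytes cross the network exactly once, then one task writes them untouched to disk with a FileChannel while another decodes them in memory for display, so nothing is re-encoded and the saved file is bit-for-bit what the camera produced.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.CompletableFuture;
import javax.imageio.ImageIO;
import javax.swing.ImageIcon;

public class FlashAirFetch {

    // Made-up FlashAir URL and local target; adjust to the real card address and folder.
    private static final String SOURCE_URL = "http://flashair/DCIM/100__TSB/IMG_0001.JPG";
    private static final Path TARGET = Paths.get("C:/photos/IMG_0001.JPG");

    public static void main(String[] args) throws Exception {
        // 1. Pull the raw JPEG bytes over the network exactly once.
        byte[] jpegBytes = download(SOURCE_URL);

        // 2. Fan out: one task saves the untouched bytes, the other decodes them for display.
        CompletableFuture<Void> save = CompletableFuture.runAsync(() -> writeBytes(jpegBytes, TARGET));
        CompletableFuture<ImageIcon> show = CompletableFuture.supplyAsync(() -> decode(jpegBytes));

        ImageIcon icon = show.join();   // hand this to the Swing component that displays the preview
        save.join();                    // file on disk is bit-for-bit the camera's JPEG
    }

    private static byte[] download(String url) throws IOException {
        try (InputStream in = new URL(url).openStream();
             ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            byte[] buf = new byte[64 * 1024];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            return out.toByteArray();
        }
    }

    private static void writeBytes(byte[] bytes, Path target) {
        try (FileChannel ch = FileChannel.open(target,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE,
                StandardOpenOption.TRUNCATE_EXISTING)) {
            ByteBuffer buf = ByteBuffer.wrap(bytes);
            while (buf.hasRemaining()) {
                ch.write(buf);          // no re-encode, so no quality loss
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    private static ImageIcon decode(byte[] bytes) {
        try {
            // Decode straight from memory; the disk is never read for display.
            return new ImageIcon(ImageIO.read(new ByteArrayInputStream(bytes)));
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```

I used a plain InputStream loop for the download; reading through a ReadableByteChannel into a ByteBuffer, as in my edit above, would work the same way. The important part is that the bytes are transferred only once and are never re-compressed before being written.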
I am new to web development and I want to know about the performance of video on the web. My question is: which parameters determine the performance of online video / video-watching websites? Can anybody tell me?
When you're streaming video over a network connection, there are two main reasons why a video might perform poorly: network and computing power. Either the network couldn't retrieve the data in time, or the computer the browser is running on couldn't decode and render it fast enough. The former is much more common.
The major properties of a video that would affect this:
Bitrate:
Expressed in Kbps or Mbps, most people think this is a measurement of quality, but it's not. Rather, bitrate is a measurement of how much data is used to represent a second of video. A larger bitrate means a bigger file for the same runtime, and assuming limited bandwidth, this is the single most important factor in determining how your video will perform.
Codec:
The codec refers to the specific algorithm used to encode and compress moving picture data into bits. The main features affected are file size and video quality (which in turn affect the bitrate), but some codecs are also more challenging to render than others, leading to poor performance on an older or burdened system even when network bandwidth isn't an issue. Again, note that a video requiring too much network is much more common than a video requiring too much computer.
For the end user who is watching the video, there are a few factors that are not part of the videos themselves that can impact performance:
The network:
Obviously, a user has to have a certain amount of bandwidth consistently available to stream video at a given quality level, so they won't be able to play much while also saturating their connection with a big download or routing through Tor; but the server also needs to be able to deliver the bits to everyone who's asking for them. The quality level at which a video can play without stuttering can be drastically reduced by network congestion, geographical distance between the client and the server, denial of service (i.e., things not responding), or any other factor that keeps all the viewers from retrieving bits consistently as the video plays. This is a tough challenge, and there's a whole industry of Content Delivery Networks (CDNs) devoted to the problem of getting a large amount of data to a large number of people in many different places on the globe as fast as possible.
Their computer/device:
As codecs have gotten more advanced, they've been able to do better, more complex math to turn pictures into bits. This has made file sizes smaller and quality higher, but it's also made the videos more computationally expensive to decode. Turning bits back into video takes horsepower, and older computers, less powerful devices, and systems that are just doing too much at the moment may be unable to decode video delivered at a certain bitrate.
There are a few other video properties relevant to performance, but mostly these end up affecting the bitrate. Resolution is an example of this: a video encoded at a native resolution of 1600x900 will be harder to stream than a video encoded at 320x240, but since the higher resolution takes up more space (i.e., requires more bits) to store than the lower resolution does for the same length of video, the difference ends up being reflected in the bitrate.
The same is true of file size: it doesn't really matter how big the file is in total; the important number is the bitrate -- the amount of space/bandwidth one second of video takes up.
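To put rough, made-up numbers on that: a 10-minute clip encoded at 5 Mbps works out to 5 Mbit/s x 600 s = 3,000 Mbit, or roughly 375 MB, whether the frames are 320x240 or 1600x900. The resolution only shows up indirectly, in how high a bitrate you need for that resolution to look good.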
I think those are the major factors that determine whether a certain video will perform well for a particular user requesting from a specific computer at a given network location.
I am making a webpage which will contain around 20-25 small-resolution (~56x56) and short-length (~3 sec) movies which will be set to autoplay and loop, so they will be looping on the page at all times. They are mostly dispersed throughout the page, so cannot easily be merged into bigger movies.
I'm trying to decide on the right format to use, balancing filesize, quality, and processor overhead.
mp4 seems the best option in terms of quality and file size; however, embedding many small mp4s on the page felt slow to me and made my computer run hot, even though, combined into a single mp4, they would only be around 300x240 -- it seems there is a lot of per-video CPU overhead when they are split up.
gif is lower quality and a bigger file size, but the CPU load felt smoother. I can't prove it, though, because I didn't measure it -- are GIFs known to perform better than mp4?
I have not tried other formats (webm, avi, ogg, etc.), but I am unsure how widely those formats are supported across browsers, and I want the webpage to be viewable from multiple browsers/countries.
How can I determine the best format to use for these videos? Is there a tool which can measure the CPU performance of my webpage so I can quantify the performance issues?
Playing many videos on a single page is a problem for most OS's as video decoding and playback is CPU intensive.
Some systems will also have hardware elements (HW acceleration) in the video playback 'pipeline' (the series of functions the OS, browser and player perform to unpack, decode, prepare and display the video) and these elements may not support or have limited capacity for parallel playbacks.
There is a fairly common workaround for this scenario if you know in advance which videos you want to play on the page, and if you don't have too many different video combinations for different users, etc.: combine the videos on the server side into a single video. The user still sees what looks like multiple videos, but you are doing all the heavy lifting on the server side.
The drawback is that you can't start or stop individual videos or quickly change the mix of videos.
If you plan to support mobile browsers as well, you should be aware that most mobile devices do not support autoplay (to help conserve users' bandwidth), and smaller devices such as phones often do not support inline video (the video will always play full screen). [Update Feb 2017: mobile devices are beginning to support autoplay as mobile data rates increase, and most will now support inline video, with iOS adding this in iOS 10.]
I have an application which generates files automatically and faxes them, and I am trying to improve its speed.
When I send these files through an application called Classic PhoneTools, the duration is 35-40 seconds. The speed is 14,400 bps, and because I choose the normal-quality option (not high quality), Classic PhoneTools reports the transmission mode as Normal.
When I send the same files through my application, the duration is always 50-60 seconds. The speed is 14,400 bps, but the transmission mode is always HQ (High Quality), even though I am not choosing it.
I tried generating all kinds of JPG, PDF and BMP files -- large and small, in color, grayscale or black/white, with larger or smaller resolutions -- and the speed only improves when I shrink the data so that it occupies half of the page or less. I think Classic PhoneTools achieves the same effect by somehow compressing the data from the file before faxing it.
What type of file, with what characteristics, should I generate in order to obtain the optimum faxing speed? Code samples in C# would be excellent.
I have a function that splits a multipage tiff into single pages and it uses the windows BitBlt function. In terms of performance, would the video card have any influence in doing the split? Would it be worth using a straight C/C++ library instead?
The video card won't participate in any of this unless it backs the destination HDC of the BitBlt. A library dedicated to imaging functions should perform better for this task, since ultimately you will be writing the pages to disk.
If you were making alterations to the image data, then there is the possibility that using your video card could help; but only if you are rendering a lot of new image data for the destination tiffs, particularly 3D scenes and the like.
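Purely to illustrate the "dedicated imaging library" route rather than anything GDI-specific, a page split looks roughly like the sketch below. (This is Java, chosen only for brevity, and it assumes a TIFF-capable ImageIO plugin such as the one bundled with Java 9+; "multipage.tif" is a placeholder path. The same shape applies to LibTIFF or GDI+ from C/C++.)

```java
import java.awt.image.BufferedImage;
import java.io.File;
import java.util.Iterator;
import javax.imageio.ImageIO;
import javax.imageio.ImageReader;
import javax.imageio.stream.ImageInputStream;

public class TiffSplitter {
    public static void main(String[] args) throws Exception {
        File source = new File("multipage.tif");   // placeholder input path

        try (ImageInputStream in = ImageIO.createImageInputStream(source)) {
            Iterator<ImageReader> readers = ImageIO.getImageReaders(in);
            if (!readers.hasNext()) {
                throw new IllegalStateException("No TIFF reader available (needs Java 9+ or a TIFF plugin)");
            }
            ImageReader reader = readers.next();
            reader.setInput(in);

            int pages = reader.getNumImages(true);  // count the pages in the container
            for (int i = 0; i < pages; i++) {
                BufferedImage page = reader.read(i);                        // decode one page
                ImageIO.write(page, "tiff", new File("page-" + i + ".tif")); // write it out as a single-page file
            }
            reader.dispose();
        }
    }
}
```

No device contexts or blitting are involved; the library decodes each page and writes it straight back out, which matches the "headed for disk anyway" point above.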
If BitBlt can map the pages into video memory, there is a very good chance that your video card will be much, much faster than the CPU. This is for a few reasons:
The card will run in parallel with your CPU, so you can do other work while it is running.
The video card is optimized to perform the memory copies on its own, instead of having the CPU copy each word from one place to another. This frees your CPU bus up for other things.
The video card probably has a larger word size for data moves, and if your blit has any raster-operation flags attached, those are likely optimized by the hardware. Also, the memory on most video cards is faster than system memory.
Note that these things aren't always true. For example, if your card shares system memory, then it won't have faster access to the memory than the CPU does. However, you still get the parallelism.
Finally, there is the possibility that the overhead of transferring the image to the card and back will overwhelm the speed improvement you get by doing the work on the card. So you just need to experiment.
I should add - I believe that you need to specify that the memory is on-card in the device context. I don't think that just creating a memory context does anything particular with the video card.