How to browse through a very very long video quickly - ffmpeg

I have several months of H.265 video from a security cam that I'd like to look through quickly.
Problem: all video players can only speed up playback to about 100x. The footage is also 4K, so the speedup is further limited by CPU power. I need a speedup of 1000x or more, so I need a player that can decode only keyframes (to speed up processing) and, on top of that, process only every x-th keyframe to make it really fast. This should work in a player; I don't want to re-encode.
Any ideas?
cheers
Timo
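One way to approximate this with ffmpeg itself (not a player, but it sidesteps the 100x cap and leaves the source untouched) is to let the decoder skip everything except keyframes and dump every N-th one as a small thumbnail you can skim. A rough sketch, assuming ffmpeg is on the PATH; file names and the keyframe interval are placeholders:

```python
# Sketch: dump every Nth keyframe of a long H.265 recording as a small JPEG,
# so months of footage can be skimmed as thumbnails instead of played back.
# Assumes ffmpeg is on PATH; "cam.mp4" and the output directory are placeholders.
import subprocess
from pathlib import Path

SRC = "cam.mp4"          # placeholder input file
OUT = Path("thumbs")     # placeholder output directory
EVERY_NTH_KEYFRAME = 10  # raise for a bigger effective speedup

OUT.mkdir(exist_ok=True)
subprocess.run([
    "ffmpeg",
    "-skip_frame", "nokey",            # decoder skips all non-keyframes
    "-i", SRC,
    # only every Nth decoded (key)frame survives, scaled down for quick viewing
    "-vf", f"select='not(mod(n,{EVERY_NTH_KEYFRAME}))',scale=640:-1",
    "-vsync", "vfr",                   # keep only the selected frames
    "-qscale:v", "3",
    str(OUT / "frame_%06d.jpg"),
], check=True)
```

Since the non-keyframes are never decoded, this should run far faster than any real-time playback of the 4K stream.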

Related

Which parameters decide the performance of video online/watching websites?

I am new to web development and want to learn about video performance on the web. My question is: which parameters determine the performance of online video / video-watching websites? Can anybody tell me?
When you're streaming video over a network connection, there are two main reasons why a video might perform poorly: network and computing power. Either the network couldn't retrieve the data in time, or the computer the browser is running on couldn't decode and render it fast enough. The former is much more common.
The major properties of a video that would affect this:
Bitrate:
Expressed in Kbps or Mbps, most people think this is a measurement of quality, but it's not. Rather, bitrate is a measurement of how much data is used to represent a second of video. A larger bitrate means a bigger file for the same runtime, and assuming limited bandwidth, this is the single most important factor in determining how your video will perform.
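As a rough back-of-the-envelope illustration of why bitrate dominates (the numbers below are made up for the example):

```python
# Illustration only: bitrate translates directly into bytes that must cross
# the network for every second of playback, regardless of "quality".
bitrate_mbps = 5          # hypothetical 5 Mbps encode
duration_s = 90 * 60      # a 90-minute video

bytes_per_second = bitrate_mbps * 1_000_000 / 8
total_gb = bytes_per_second * duration_s / 1e9

print(f"{bytes_per_second / 1e6:.2f} MB must arrive every second")  # ~0.62 MB/s
print(f"{total_gb:.1f} GB for the whole video")                     # ~3.4 GB
```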
Codec:
The codec refers to the specific algorithm used to encode and compress moving picture data into bits. The main features affected are file size and video quality, (which in turn affects the bitrate), but some codecs are also more challenging to render than others, leading to poor performance on an older or burdened system even when the network bandwidth isn't an issue. Again, note that a video requiring too much network is much more common than a video requiring too much computer.
For the end user who is watching the video, there are a few factors that are not part of the videos themselves that can impact performance:
The network:
Obviously, a user has to have a certain amount of bandwidth consistently available to stream video at a given quality level, so they won't be able to play much if their connection is already busy with a large download or routed through Tor, but the server also needs to be able to deliver the bits to everyone who's asking for them. The quality level of video that can play without stuttering can be drastically reduced by network congestion, geographical distance between the client and the server, denial of service (i.e., things not responding), or any other factor that keeps all the viewers from retrieving bits consistently as the video plays. This is a tough challenge, and there's a whole industry of Content Delivery Networks (CDNs) devoted to the problem of getting a large amount of data to a large number of people in many different places on the globe as fast as possible.
Their computer/device:
As codecs have gotten more advanced, they've been able to do better, more complex math to turn pictures into bits. This has made file sizes smaller and quality higher, but it's also made the videos more computationally expensive to decode. Turning bits back into video takes horsepower, and older computers, less powerful devices, and systems that are just doing too much at the moment may be unable to decode video delivered at a certain bitrate.
There are a few other video properties relevant to performance, but mostly these end up affecting the bitrate. Resolution is an example of this: a video encoded at a native resolution of 1600x900 will be harder to stream than a video encoded at 320x240, but since the higher resolution takes up more space (i.e., requires more bits) to store than the lower resolution does for the same length of video, the difference ends up being reflected in the bitrate.
The same is true of file size: it doesn't really matter how big the file is in total; the important number is the bitrate -- the amount of space/bandwidth one second of video takes up.
I think those are the major factors that determine whether a certain video will perform well for a particular user requesting from a specific computer at a given network location.

Embedding many small movies in page (gif vs mp4 vs webm vs ?)

I am making a webpage which will contain around 20-25 small-resolution (~56x56) and short-length (~3 sec) movies which will be set to autoplay and loop, so they will be looping on the page at all times. They are mostly dispersed throughout the page, so cannot easily be merged into bigger movies.
I'm trying to decide on the right format to use, balancing filesize, quality, and processor overhead.
mp4 seems the best option in terms of quality and file size; however, embedding many small mp4s on the page felt slow and made my computer run hot, despite the fact that combined into one mp4 they would only be around 300x240. It seems there is a lot of CPU overhead when they are divided.
gif is lower quality and a bigger file size, but the CPU performance felt smoother. I can't prove it, though, because I didn't measure it -- are GIFs known to perform better than mp4?
I have not tried other formats (webm, avi, ogg, etc.), but I am unsure how well all of these formats are supported by most browsers, and I want the webpage to be viewable across multiple browsers/countries.
How can I determine the best format to use for these videos? Is there a tool which can measure the CPU performance of my webpage so I can quantify the performance issues?
Playing many videos on a single page is a problem for most OS's as video decoding and playback is CPU intensive.
Some systems will also have hardware elements (HW acceleration) in the video playback 'pipeline' (the series of functions the OS, browser and player perform to unpack, decode, prepare and display the video) and these elements may not support or have limited capacity for parallel playbacks.
There is a fairly common workaround to this scenario if you know in advance which videos you want to play on the page, and if you don't have too many different video combinations for different users etc.: combine the videos on the server side into a single video. This means the user still sees what looks like multiple videos, but you are doing all the heavy lifting on the server side.
The drawback is that you can't start or stop individual videos or quickly change the mix of videos.
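A minimal sketch of that server-side combination, using ffmpeg's xstack filter to tile four equally sized clips into a 2x2 grid (file names, sizes, and layout are placeholders):

```python
# Sketch: tile four small looping clips into one video on the server, so the
# page plays a single <video> element instead of many.
# Assumes ffmpeg on PATH and clips of identical dimensions; names are placeholders.
import subprocess

clips = ["a.mp4", "b.mp4", "c.mp4", "d.mp4"]   # hypothetical ~56x56 clips

cmd = ["ffmpeg"]
for clip in clips:
    cmd += ["-i", clip]

cmd += [
    "-filter_complex",
    # 2x2 grid: top-left, top-right, bottom-left, bottom-right
    "[0:v][1:v][2:v][3:v]xstack=inputs=4:layout=0_0|w0_0|0_h0|w0_h0",
    "-an",            # the loops carry no audio
    "combined.mp4",
]
subprocess.run(cmd, check=True)
```

On the page, cropping and positioning that single element with CSS can make the tiles look like separate movies while only one decode pipeline does the work.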
If you also plan to support mobile browsers, you should be aware that most mobile devices do not support autoplay (to help conserve users' bandwidth), and smaller devices such as phones often do not support inline video (the video will always play full screen). [Update Feb 2017: mobile devices are beginning to support autoplay as mobile data rates increase, and most will now support inline video, with iOS adding this in iOS 10]

Saving highest quality video from video-capture card

I have a machine with 2x 3 GHz dual-core Xeons and 4x 10k RPM SCSI-320 disks in RAID 0.
The capture card is an Osprey 560 64-bit PCI card.
The operating system is currently Windows Server 2003.
The video stream that I can open with VLC using DirectShow is rather nice quality.
However, trying to save this video stream without loss of quality has proven quite difficult.
Using the h264 codec I am able to achieve satisfying quality, but all 4 cores jump to 100% load after a few seconds and it starts dropping frames; the machine is not powerful enough for real-time encoding. I've not been able to achieve satisfying MPEG-1 or MPEG-4 quality, no matter which bitrate I set.
Thing is, the disks in this machine are pretty fast even by today's standards, and they are bored. I don't care about disk usage, I want quality.
I have searched in vain for a way to pump that beautiful video stream that I see in VLC onto the disk for later encoding; I reckon the disks would be fast enough. Or maybe something could apply light compression, enough that the disks can keep up, but not so much as to lose visible quality.
I have tried FFmpeg, as it seems capable of streaming a yuv4 stream down to the disk, but of course FFmpeg is unable to open the dshow device (same error as this guy: Ffmpeg streaming from capturing device Osprey 450e fails).
Please recommend a capable piece of software which can do this.
I finally figured it out, and it is deceptively simple:
Just uncheck the "transcode" option in the window where you can select an encoding preset.
2 gigabytes per minute is a low price to pay for finally getting those precious memories off old videotapes in the quality they deserve.
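For reference, when a capture card does expose a DirectShow device that FFmpeg can actually open (which the Osprey above apparently would not), the same idea of applying only light, lossless compression so the disks do the work instead of the CPU can be sketched like this; the device name below is a placeholder:

```python
# Sketch: capture a DirectShow device to disk with a fast lossless codec,
# deferring heavy encoding until later. Assumes a Windows ffmpeg build with
# DirectShow support; the device name below is hypothetical.
import subprocess

DEVICE = "video=Osprey-560 Video Device"   # hypothetical dshow device name

subprocess.run([
    "ffmpeg",
    "-f", "dshow",
    "-i", DEVICE,
    "-c:v", "ffvhuff",     # fast lossless intra-frame codec: big files, low CPU
    "capture.avi",
], check=True)
```

A lossless capture codec like this trades disk space for CPU time, which fits the "the disks are bored" situation described above.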

Image compression - Speed and Latency

What is the best way to reduce the size fast and decompress it fast?
I will be using it for real-time video (desktop sharing).
And I don't think video compression will do any good there.
Speed and latency are everything here. I am currently looking at LZ4 for compressing a BMP, as it's pretty fast, but it starts to become quite a bottleneck at higher resolutions (1280x720), and I only mean in compression and decompression speed, not in CPU usage.
I prefer lossless, but if there is a lossy (transparent?) option I am willing to try it, of course.
EDIT:
I am looking at libjpeg-turbo, but I have no idea how to use it in C#; I can't seem to find any information on it.
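For what it's worth, the LZ4-versus-lossy trade-off being weighed here is easy to measure directly. A rough benchmarking sketch (a Python stand-in for the C# case, assuming the lz4 and Pillow packages; the random frame is a worst case, and real desktop content compresses far better):

```python
# Rough benchmark sketch: time LZ4 (lossless) against JPEG (lossy) on one
# 1280x720 frame. Synthetic noise is a worst case for both codecs; swap in a
# real screen grab for meaningful numbers.
import io
import time

import lz4.frame
import numpy as np
from PIL import Image

frame = np.random.randint(0, 256, (720, 1280, 4), dtype=np.uint8)  # fake BGRA capture
raw = frame.tobytes()

t0 = time.perf_counter()
lz4_blob = lz4.frame.compress(raw)
t1 = time.perf_counter()

buf = io.BytesIO()
Image.fromarray(np.ascontiguousarray(frame[:, :, :3])).save(buf, format="JPEG", quality=80)
t2 = time.perf_counter()

print(f"raw  : {len(raw) / 1e6:.1f} MB")
print(f"LZ4  : {len(lz4_blob) / 1e6:.1f} MB in {(t1 - t0) * 1000:.1f} ms")
print(f"JPEG : {buf.tell() / 1e6:.1f} MB in {(t2 - t1) * 1000:.1f} ms")
```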
Have you tried Red5 Media Server? It's open-source software for streaming video.

Smooth play of live network audio samples

I am writing a client/server app in which the server captures live audio samples from some external device (a mic, for example) and sends them to the client. The client then wants to play those samples. My app will run on a local network, so I have no problem with bandwidth (my sound is 8 kHz, 8-bit stereo, while my network card is 1000 Mb). On the client I buffer the data for a short time and then start playback, and as data arrives from the server I send it to the sound card. This seems to work fine, but there is a problem:
when the buffer on the client side runs out, I experience gaps in the played sound.
I believe this is because of a difference in sampling clocks between the server and the client; that is, 8 kHz on the server is not exactly the same as 8 kHz on the client.
I could solve this by pausing the client's playback and buffering again, but my boss won't accept that, since I have plenty of bandwidth and should be able to play sound with no gaps or pauses.
So I decided to dynamically change the playback speed on the client, but I don't know how.
I am programming on Windows (native) and currently use waveOutXXX to play the sound. I can use any other native library (DirectX/DirectSound, Jack or ...), but it should provide smooth playback on the client.
I have programmed with waveOutXXX many times without any problem and know it well, but I can't solve my problem of dynamic resampling.
I would suggest that your problem isn't likely due to mis-matched sample rates, but something to do with your buffering. You should be continuously dumping data to the sound card, and continuously filling your buffer. Use a reasonable buffer size... 300ms should be enough for most applications.
Now, over long periods of time, it is possible for the clock on the recording side and the clock on the playback side to drift apart enough that the 300ms buffer is no longer sufficient. I would suggest that rather than resampling at such a small difference, which could introduce artifacts, simply add samples at the encoding end. You still record at 8kHz, but you might add a sample or two every second, to make that 8.001kHz or so. Simply doubling one of the existing samples for this (or even a simple average between one sample and the next) will not be audible. Adjust this as necessary for your application.
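A small sketch of that padding idea, duplicating one frame roughly every second of 8 kHz, 8-bit stereo audio on the capture side (the padding interval is a tuning knob, not a prescribed value):

```python
# Sketch of the padding idea above: every PAD_EVERY frames, repeat the last
# frame once, nudging an 8 kHz capture to roughly 8.001 kHz so the playback
# side never starves. 8-bit stereo => one frame is 2 bytes.
FRAME_BYTES = 2          # one 8-bit stereo sample pair
PAD_EVERY = 8000         # one extra frame per second of 8 kHz audio (tune this)

def pad_chunk(chunk: bytes, state: dict) -> bytes:
    """Insert a duplicate frame every PAD_EVERY frames, across chunk boundaries."""
    out = bytearray()
    for i in range(0, len(chunk), FRAME_BYTES):
        frame = chunk[i:i + FRAME_BYTES]
        out += frame
        state["count"] += 1
        if state["count"] >= PAD_EVERY:
            out += frame          # duplicating one frame per second is inaudible
            state["count"] = 0
    return bytes(out)

# usage: state = {"count": 0}; send(pad_chunk(captured_bytes, state))
```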
I had a similar problem in an application I worked on. It did not involve network, but it did involve source data being captured in real-time at a certain fixed sampling rate, a large amount of signal processing, and finally output to the sound card at a fixed rate. Like you, I had gaps in the playback at buffer boundaries.
It seemed to me like the problem was that the processing being done caused audio data to make it to the sound card in a very jerky manner. That is, it would get a large chunk, then it would be a long time before it got another chunk. The overall throughput was correct, but this latency caused the sound card to often be starved for data. I suppose you may have the same situation with the network piece in your system.
The way I solved it was to first make the audio buffer longer. Then, every time a new chunk of audio was received, I checked how full the buffer was. If it was less than 20% full, I would write some silence to make it around 60% full.
You may think that this goes against reducing the gaps in playback since it is actually adding a gap, but it actually helps. The problem that I was having was that even though I had a significantly large audio buffer, I was always right at the verge of it being empty. With the other latencies in the system, this resulted in playback gaps on almost every buffer.
Writing the silence when the buffer started to get empty, but before it actually did, ensured that the buffer always had some data to spare if the processing fell behind a little. Also, just a single small gap in playback is very hard to notice compared to many periodic gaps.
I don't know if this will work for you, but it should be easy to implement and try out.
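A minimal sketch of that watermark scheme, using the 20%/60% figures from above and treating the buffer as a plain byte FIFO (the sound-card feeding side is left abstract):

```python
# Sketch of the watermark scheme above: when a new chunk arrives and the FIFO
# has dropped below 20% of capacity, top it up to ~60% with silence so the
# sound card never runs completely dry.
class PlaybackBuffer:
    def __init__(self, capacity_bytes: int, silence_byte: int = 0x80):
        self.capacity = capacity_bytes
        self.silence = bytes([silence_byte])   # midpoint value for unsigned 8-bit audio
        self.fifo = bytearray()

    def push_chunk(self, chunk: bytes) -> None:
        if len(self.fifo) < 0.2 * self.capacity:
            deficit = int(0.6 * self.capacity) - len(self.fifo)
            self.fifo += self.silence * deficit   # one short gap now beats many later
        self.fifo += chunk

    def pull(self, n: int) -> bytes:
        """Feed the sound card; returns up to n bytes."""
        out = bytes(self.fifo[:n])
        del self.fifo[:n]
        return out
```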
