What is the best way to compress data quickly and decompress it quickly?
I will be using it for real-time video (desktop sharing).
And I don't think a video codec will do much good there.
Speed and latency are everything here. I am currently looking at LZ4 for compressing a BMP, as it's pretty fast, but it becomes quite the bottleneck at higher resolutions (1280x720), and I mean only compression and decompression speed, not CPU usage.
I prefer lossless, but if there is a lossy (visually transparent?) option I am willing to try it, of course.
EDIT:
I am looking at libjpeg-turbo, but I have no idea how to use it from C#; I can't seem to find any information on it.
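For reference, here is a minimal sketch of the TurboJPEG C API that libjpeg-turbo ships (the functions a C# wrapper would typically reach via P/Invoke). The pixel format, quality, and subsampling values below are placeholder choices, not recommendations:

```c
#include <stdio.h>
#include <turbojpeg.h>

/* Compress one BGRA frame (e.g. a captured desktop bitmap) to JPEG.
   Returns the JPEG size, or 0 on failure. The caller owns *jpegBuf
   and must release it with tjFree(). */
unsigned long compress_frame(const unsigned char *pixels, int width, int height,
                             unsigned char **jpegBuf)
{
    tjhandle enc = tjInitCompress();
    unsigned long jpegSize = 0;
    *jpegBuf = NULL;  /* let TurboJPEG allocate the output buffer */

    /* TJSAMP_420 + TJFLAG_FASTDCT trades a little quality for speed,
       which fits a real-time desktop-sharing use case. */
    if (tjCompress2(enc, pixels, width, 0 /* pitch: tightly packed */, height,
                    TJPF_BGRA, jpegBuf, &jpegSize,
                    TJSAMP_420, 75 /* quality */, TJFLAG_FASTDCT) != 0) {
        fprintf(stderr, "tjCompress2: %s\n", tjGetErrorStr());
        jpegSize = 0;
    }
    tjDestroy(enc);
    return jpegSize;
}
```

In a real capture loop you would keep the tjhandle alive across frames rather than creating and destroying it per frame as shown here.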
Have you tried Red5 Media Server? It's open-source software for streaming video.
Related
I have several months of H.265 video from a security camera that I would like to look through quickly.
Problem: all video players can only speed playback up by about 100x. The footage is also 4K, so the speed-up is further limited by CPU power. I need a 1000x speed-up or more, so I need a player that can decode only keyframes (to cut down on processing; see the sketch below), and only every x-th keyframe at that, to make it really fast. This should work in a player; I don't want to re-encode.
Any ideas?
cheers
Timo
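As a sketch of the keyframe-only approach described above: FFmpeg's libraries can discard non-keyframes at both the demuxer and decoder level, and additionally keep only every N-th keyframe. The file name and the stride N below are placeholders, and error handling is trimmed:

```c
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

int main(void)
{
    const char *path = "cam.mp4";  /* placeholder input */
    const int every_nth = 10;      /* keep 1 of every 10 keyframes */

    AVFormatContext *fmt = NULL;
    if (avformat_open_input(&fmt, path, NULL, NULL) < 0) return 1;
    avformat_find_stream_info(fmt, NULL);

    int vs = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
    const AVCodec *dec = avcodec_find_decoder(fmt->streams[vs]->codecpar->codec_id);
    AVCodecContext *ctx = avcodec_alloc_context3(dec);
    avcodec_parameters_to_context(ctx, fmt->streams[vs]->codecpar);
    ctx->skip_frame = AVDISCARD_NONKEY;  /* decoder ignores non-keyframes */
    avcodec_open2(ctx, dec, NULL);

    AVPacket *pkt = av_packet_alloc();
    AVFrame *frame = av_frame_alloc();
    long kf = 0;
    while (av_read_frame(fmt, pkt) >= 0) {
        if (pkt->stream_index == vs && (pkt->flags & AV_PKT_FLAG_KEY)) {
            if (kf++ % every_nth == 0) {  /* thin the keyframes out further */
                avcodec_send_packet(ctx, pkt);
                while (avcodec_receive_frame(ctx, frame) == 0) {
                    /* scale and display the frame here */
                }
            }
        }
        av_packet_unref(pkt);
    }
    /* cleanup omitted for brevity */
    return 0;
}
```

This works because H.265 keyframes decode independently, so skipping everything between them costs no correctness, only temporal resolution.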
Can anyone offer pointers or tips on finding or creating a data compression algorithm that has a guaranteed compression ratio? Obviously this couldn't be a lossless algorithm.
My question is similar to the one here, but there was no suitable answer:
Shrink string encoding algorithm
What I am trying to do is stream live audio over a wireless network (my own spec, not WiFi) with a tight bandwidth restriction. Let's say I have packets 60 bytes in size. I need an algorithm to compress these to, say, 35 bytes every time without fail. Reliable guaranteed compression to a fixed size is key. Audio quality is less of a priority.
Any suggestions or pointers? I may end up creating my own algorithm from scratch, so even if you don't know of any libraries or standard algorithms, I would be grateful for brilliant ideas of any kind!
It is good that you mentioned your use case: live audio.
There are many audio CODECs (COder-DECoders) that work exactly this way (constant bitrate). For example, take a look at Opus: you can select bitrates from 6 kb/s to 510 kb/s and frame sizes from 2.5 ms to 60 ms.
I used it in the past to transfer audio over RF data links. You'll probably need to implement a de-jitter buffer as well (see more here). Also note that the internal clock of many sound cards is not accurate, so there may be "drift" between the source and target rates (e.g., a 30 ms buffer may play back in 29.9 ms or 30.1 ms), and you may need to compensate for this as well.
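To make the constant-bitrate behaviour concrete, here is a minimal sketch against the libopus C API. The 16 kHz / 20 ms / 14 kbit/s numbers are chosen only because 14 kbit/s over a 20 ms frame works out to exactly 35 bytes per packet, matching the question above:

```c
#include <stdio.h>
#include <opus/opus.h>

#define FS         16000                   /* sample rate */
#define FRAME_MS   20
#define FRAME_SAMP (FS * FRAME_MS / 1000)  /* 320 samples per frame */

int main(void)
{
    int err;
    OpusEncoder *enc = opus_encoder_create(FS, 1, OPUS_APPLICATION_VOIP, &err);
    if (err != OPUS_OK) return 1;

    /* 14000 bit/s * 0.020 s / 8 = 35 bytes per packet */
    opus_encoder_ctl(enc, OPUS_SET_BITRATE(14000));
    opus_encoder_ctl(enc, OPUS_SET_VBR(0));  /* hard CBR: fixed packet size */

    opus_int16 pcm[FRAME_SAMP] = {0};        /* one 20 ms frame of silence */
    unsigned char packet[60];
    opus_int32 n = opus_encode(enc, pcm, FRAME_SAMP, packet, sizeof packet);
    printf("packet size: %d bytes\n", n);    /* should print 35 in hard CBR */

    opus_encoder_destroy(enc);
    return 0;
}
```

With OPUS_SET_VBR(0) (hard CBR) the encoder pads every packet to the same size, which is exactly the "guaranteed compression to a fixed size" the question asks for.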
I have a machine with two 3 GHz dual-core Xeons and four 10k RPM SCSI Ultra320 disks in RAID 0.
The capture card is an Osprey 560 64-bit PCI card.
The operating system is currently Windows Server 2003.
The video stream that I can open with VLC using DirectShow is rather nice quality.
However, trying to save this video stream without loss of quality has proven quite difficult: using the H.264 codec I am able to achieve a satisfying quality, but all four cores jump to 100% load after a few seconds and it starts dropping frames; the machine is not powerful enough for real-time encoding. I have not been able to achieve satisfying MPEG-1 or MPEG-4 quality, no matter which bitrate I set.
The thing is, the disks in this machine are pretty fast even by today's standards, and they are bored. I don't care about disk usage; I want quality.
I have searched in vain for a way to pump that beautiful video stream that I see in VLC onto the disk for later encoding. I reckon the disks would be fast enough; or maybe something could apply a light compression, enough that the disks can keep up, but not so much as to lose visible quality.
I have tried FFmpeg, as it seems capable of streaming a yuv4mpeg stream down to the disk, but of course FFmpeg is unable to open the DirectShow device (same error as this guy: Ffmpeg streaming from capturing device Osprey 450e fails).
Please recommend a capable piece of software that can do this.
I finally figured it out, and it is deceptively simple:
Just uncheck the "transcode" option in the window where you can select an encoding preset.
Two gigabytes per minute is a low price to pay for finally getting those precious memories off old videotapes in the quality they deserve.
I'm looking for the fastest way to encode a webcam stream so that it is viewable in an HTML5 video tag. I'm using a Pandaboard (http://www.digikey.com/product-highlights/us/en/texas-instruments-pandaboard/686#tabs-2) for the hardware, and I can use GStreamer, cvlc, or FFmpeg. I'll be using it to drive a robot, so I need the least possible lag in the video stream. Quality doesn't have to be great and it doesn't need audio. Also, this is only for one client, so bandwidth isn't an issue. The best solution so far is FFmpeg with mpjpeg, which gives me around 1 second of delay. Anything better?
I have been asked this many times, so I will try to answer it a bit generically and not just for MJPEG. Getting very low delays in a system requires a bit of system-engineering effort and also an understanding of the components.
Some simple top-level tweaks I can think of:
Ensure the codec is configured for the lowest delay. Codecs (especially embedded-system codecs) have a low-delay configuration; enable it. If you are using H.264 this is most useful: many people don't realize that, by the standard's requirements, H.264 decoders may need to buffer frames before displaying them. This can be up to 16 frames for QCIF and up to 5 frames for 720p. That is a lot of delay in getting the first frame out. If you do not use H.264, still ensure you do not have B-pictures enabled, as they add delay to getting the first picture out.
Since you are using MJPEG, this mostly does not apply to you.
Encoders also have a rate-control delay (called init delay or VBV buffer size). Set it to the smallest value that gives you acceptable quality; that will also reduce the delay. Think of this as the bitstream buffer between the encoder and the decoder. If you are using x264, that would be the VBV buffer size (see the x264 sketch below).
Some other simple configurations: use as few I-pictures as possible (i.e., a large intra period).
I-pictures are huge and add to the delay of sending data over the network. This may not be very visible in systems where the end-to-end delay is in the range of one second or more, but when you are designing systems that need an end-to-end delay of 100 ms or less, this and several other aspects come into play. Also ensure you are using a low-latency audio codec, such as AAC-LC (and not HE-AAC).
In your case, to get to lower latencies I would suggest moving away from MJPEG and using at least MPEG-4 without B-pictures (Simple Profile), or best, H.264 Baseline Profile (x264 offers a zerolatency option). The simple reason you will get lower latency is that the encoder will produce a lower bitrate, so there is less data to send out, and you can go to full frame rate. If you must stick to MJPEG, you are close to what you can get from the open-source components as-is, without more advanced feature support from the codec and the system.
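To make those encoder-side settings concrete (zerolatency tune, Baseline profile, no B-pictures, a large intra period, and a small VBV buffer), here is a hedged sketch using the x264 C API. The resolution, frame rate, and bitrate numbers are placeholders, not recommendations:

```c
#include <x264.h>

/* Configure x264 for low-delay encoding; returns NULL on failure. */
x264_t *open_low_delay_encoder(int width, int height)
{
    x264_param_t p;

    /* zerolatency disables B-frames, frame threading, and lookahead */
    if (x264_param_default_preset(&p, "ultrafast", "zerolatency") < 0)
        return NULL;

    p.i_width      = width;
    p.i_height     = height;
    p.i_fps_num    = 30;        /* placeholder frame rate */
    p.i_fps_den    = 1;
    p.i_bframe     = 0;         /* no B-pictures */
    p.i_keyint_max = 300;       /* large intra period: few I-pictures */

    /* Small VBV buffer = small rate-control (init) delay. */
    p.rc.i_rc_method       = X264_RC_ABR;
    p.rc.i_bitrate         = 1000;  /* kbit/s, placeholder */
    p.rc.i_vbv_max_bitrate = 1000;
    p.rc.i_vbv_buffer_size = 100;   /* kbit: ~100 ms at this bitrate */

    if (x264_param_apply_profile(&p, "baseline") < 0)
        return NULL;

    return x264_encoder_open(&p);
}
```

The small VBV buffer here is exactly the "init delay" knob from the rate-control point above.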
Another aspect is the transmission of the content to the display unit. If you can use UDP, it will reduce latency quite a lot compared to TCP, though it can be lossy at times depending on network conditions. You have mentioned HTML5 video; I am curious how you are doing live streaming to an HTML5 video tag.
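For the UDP point, here is a minimal C sketch of pushing each encoded chunk out the moment it is ready. The destination address and port are placeholders, and a real system would add sequence numbers and loss handling (e.g. RTP):

```c
#include <string.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>

/* Send one encoded chunk (e.g. a NAL unit or slice) immediately over UDP. */
static void send_chunk(int sock, const struct sockaddr_in *dst,
                       const unsigned char *data, size_t len)
{
    /* No connection setup, no retransmission, no head-of-line blocking:
       each datagram leaves as soon as the encoder produces it. */
    sendto(sock, data, len, 0, (const struct sockaddr *)dst, sizeof *dst);
}

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(5000);                       /* placeholder port */
    inet_pton(AF_INET, "192.168.1.50", &dst.sin_addr);  /* placeholder host */

    unsigned char demo[4] = {0, 0, 0, 1};  /* stand-in for encoder output */
    send_chunk(sock, &dst, demo, sizeof demo);
    close(sock);
    return 0;
}
```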
There are other aspects that can also be tweaked, which I would put in the advanced category; they require the system engineer to try various things out:
What is the network buffering in the OS? The OS buffers data before sending it out, for performance reasons. Tweak this to get a good balance between throughput and latency (see the socket sketch after this list).
Are you using CBR or VBR encoding? While CBR is great for low jitter, you can also use capped VBR if the codec provides it.
Can your decoder start decoding partial frames? Then you don't have to worry about framing the data before providing it to the decoder; just keep pushing data to the decoder as soon as possible.
Can you do field encoding? It halves the time, compared to frame encoding, before the first picture comes out.
Can you do sliced encoding, with callbacks whenever a slice is available, so it can be sent over the network immediately?
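Here is the socket sketch promised above for the OS-buffering point: a small send buffer bounds how much data the kernel queues ahead of the network, and if you are stuck with TCP, disabling Nagle's algorithm stops small packets from being coalesced. The 64 KB figure is only a placeholder starting point to tune:

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Tune a socket for latency rather than throughput. */
void tune_for_latency(int sock)
{
    /* A small send buffer limits how much data the kernel queues ahead
       of the network; 64 KB is a placeholder to adjust experimentally. */
    int sndbuf = 64 * 1024;
    setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof sndbuf);

    /* TCP only: disable Nagle's algorithm so small packets (e.g. slices)
       go out immediately instead of being coalesced. */
    int one = 1;
    setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);
}
```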
In the sub-100 ms latency systems I have worked on, all of the above are used. Some of the features may not be available in open-source components, but if you really need them and are enthusiastic, you could go ahead and implement them.
EDIT:
I realize you cannot do a lot of the above for an iPad streaming solution, and HLS also limits the latency you can achieve, but I hope this proves useful in other cases when you need a low-latency system.
We had a similar problem: in our case it was necessary to time external events and sync them with the video stream. We tried several solutions, but the one described here solved the problem and is extremely low latency:
Github Link
It uses GStreamer to transcode to MJPEG, which is then sent to a small Python streaming server. This has the advantage that it uses the <img> tag instead of <video>, so it can be viewed by most modern browsers, including on the iPhone.
If you want the <video> tag, a simple solution is to use http-launch. That had the lowest latency of all the solutions we tried, so it might work for you. Be warned that Ogg/Theora will not work on Safari or IE, so those wishing to target Mac or Windows will have to modify the pipeline to use MP4 or WebM.
Another solution that looks promising is gst-streaming-server, but we simply couldn't find enough documentation to make it worth pursuing. I'd be grateful if somebody could ask a Stack Overflow question about how it should be used!
We have to compress a ton of (monochrome) image data and move it quickly. If one were to use just the parallelizable stages of JPEG compression (the DCT and the run-length encoding of the quantized results) and run them on a GPU so that each block is compressed in parallel, I am hoping that would be very fast and still yield a very significant compression factor, like full JPEG does.
Does anyone with more GPU / image-compression experience have any idea how this would compare, both compression- and performance-wise, with using libjpeg on a CPU? (If it is a stupid idea, feel free to say so; I am an extreme novice in my knowledge of CUDA and the various stages of JPEG compression.) Certainly it will be less compression and hopefully(?) faster, but I have no idea how significant those factors may be.
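To make the per-block independence the question relies on concrete, here is a sketch in C of the forward DCT plus quantization for one 8x8 tile. Since each tile touches only its own 64 pixels, tiles can be processed fully in parallel (e.g. one GPU thread block per tile in a CUDA port). The uniform quantizer q is a placeholder for the real JPEG quantization tables:

```c
#include <math.h>

#define N 8

/* Forward 8x8 DCT + uniform quantization of one tile. Each tile reads and
   writes only its own pixels, so tiles have no data dependencies on each
   other and can be processed in parallel. */
void dct_quantize_block(const unsigned char in[N][N], int out[N][N], int q)
{
    for (int u = 0; u < N; u++) {
        for (int v = 0; v < N; v++) {
            double cu = (u == 0) ? 1.0 / sqrt(2.0) : 1.0;
            double cv = (v == 0) ? 1.0 / sqrt(2.0) : 1.0;
            double sum = 0.0;
            for (int x = 0; x < N; x++)
                for (int y = 0; y < N; y++)
                    sum += (in[x][y] - 128) *
                           cos((2 * x + 1) * u * M_PI / 16.0) *
                           cos((2 * y + 1) * v * M_PI / 16.0);
            /* 0.25 * cu * cv is the JPEG DCT normalization; q stands in
               for the per-coefficient JPEG quantization tables. */
            out[u][v] = (int)lround(0.25 * cu * cv * sum / q);
        }
    }
}
```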
You could hardly get more compression on a GPU; there are just no complex-enough algorithms that can use that much power.
When working with simple algorithms like JPEG, the computation is so cheap that you'll spend most of the time transferring data over the PCI-E bus (which has significant latency, especially when the card does not support DMA transfers).
The positive side is that if the card has DMA, you can free up the CPU for more important work and get the image compression "for free".
In the best case, you can get about a 10x improvement on a top-end GPU compared to a top-end CPU, provided that both the CPU and GPU code are well optimized.