Can anyone offer me pointers or tips on finding or creating a data compression algorithm that has a guaranteed compression ratio? Obviously this couldn't be a lossless algorithm.
My question is similar to the one here, but there was no suitable answer:
Shrink string encoding algorithm
What I am trying to do is stream live audio over a wireless network (my own spec, not WiFi) with a tight bandwidth restriction. Let's say I have packets 60 bytes in size. I need an algorithm to compress these to, say, 35 bytes every time without fail. Reliable guaranteed compression to a fixed size is key. Audio quality is less of a priority.
Any suggestions or pointers? I may end up creating my own algorithm from scratch, so even if you don't know of any libraries or standard algorithms, I would be grateful for brilliant ideas of any kind!
It is good that you mentioned your use case: live audio.
There are many audio codecs (COder-DECoders) that work exactly this way, in constant-bitrate mode. For example, take a look at Opus: you can select bitrates from 6 kb/s to 510 kb/s and frame sizes from 2.5 ms to 60 ms.
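As a minimal sketch of how this looks with libopus (the values are illustrative: 16 kHz mono input, 20 ms frames, and the 35-byte packets from the question imply 35 × 8 × 50 = 14000 b/s):

```c
/* Minimal sketch: hard-CBR Opus encoding so every packet has a fixed size.
 * Assumes libopus is installed; the values (16 kHz mono, 20 ms frames,
 * 35-byte packets => 14 kb/s) are illustrative, not a recommendation. */
#include <opus.h>
#include <stdio.h>

#define SAMPLE_RATE 16000
#define CHANNELS    1
#define FRAME_SIZE  (SAMPLE_RATE / 50)  /* 20 ms = 320 samples */
#define PACKET_SIZE 35                  /* target: 35 bytes per frame */

int main(void) {
    int err;
    OpusEncoder *enc = opus_encoder_create(SAMPLE_RATE, CHANNELS,
                                           OPUS_APPLICATION_VOIP, &err);
    if (err != OPUS_OK) return 1;

    /* Hard CBR: disable VBR and request the bitrate that fills exactly
     * PACKET_SIZE bytes per 20 ms frame: 35 * 8 / 0.020 = 14000 b/s. */
    opus_encoder_ctl(enc, OPUS_SET_VBR(0));
    opus_encoder_ctl(enc, OPUS_SET_BITRATE(PACKET_SIZE * 8 * 50));

    opus_int16 pcm[FRAME_SIZE * CHANNELS] = {0};  /* one frame of audio */
    unsigned char packet[PACKET_SIZE];

    /* With hard CBR, opus_encode returns PACKET_SIZE for every frame. */
    opus_int32 n = opus_encode(enc, pcm, FRAME_SIZE, packet, sizeof packet);
    printf("encoded %d bytes\n", (int)n);

    opus_encoder_destroy(enc);
    return 0;
}
```

In hard-CBR mode the encoder fits every frame to exactly the requested size, which gives the "guaranteed compression to a fixed size" the question asks for.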
I used it in the past to transfer audio over RF datalinks. You'll probably need to implement a de-jitter buffer as well (see more here). Also note that the internal clock of many sound cards is not accurate, and there may be "drift" between the source and target rate (e.g., a 30 ms buffer may be played in 29.9 ms or 30.1 ms), so you may need to compensate for this as well.
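A de-jitter buffer can be quite small. Here is a toy fixed-delay sketch (all names and sizes are made up for illustration): packets are slotted by sequence number, and a playout timer pulls them out a fixed number of frames behind the network.

```c
/* Toy de-jitter buffer sketch (names and sizes are made up for
 * illustration). Packets are slotted by sequence number and played
 * out at a fixed delay; a missing slot is concealed with silence. */
#include <stdint.h>
#include <string.h>

#define SLOTS       8     /* buffer depth: 8 frames ~= 160 ms at 20 ms/frame */
#define PACKET_SIZE 35

typedef struct {
    uint8_t data[SLOTS][PACKET_SIZE];
    uint8_t filled[SLOTS];
} JitterBuf;

/* Called from the network side: slot the packet by its sequence number. */
void jb_put(JitterBuf *jb, uint16_t seq, const uint8_t *pkt) {
    int slot = seq % SLOTS;
    memcpy(jb->data[slot], pkt, PACKET_SIZE);
    jb->filled[slot] = 1;
}

/* Called from the playout timer, running a fixed delay behind the network.
 * Returns 1 if a real packet was available, 0 if it was lost or late. */
int jb_get(JitterBuf *jb, uint16_t play_seq, uint8_t *out) {
    int slot = play_seq % SLOTS;
    if (!jb->filled[slot]) {
        memset(out, 0, PACKET_SIZE);  /* conceal: play silence */
        return 0;
    }
    memcpy(out, jb->data[slot], PACKET_SIZE);
    jb->filled[slot] = 0;
    return 1;
}
```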
Related
I am looking for the optimal way to compress a large mp3 that contains >80% silence. It seems like mp3s consume nearly the same amount of space for a given duration independent of the content of the file. Are there any other compression formats that would do a better job of reducing the file size without significantly affecting quality of the non-silent parts?
For any lossy codec (MP3 included), be sure you're encoding with a variable bitrate (VBR). If you use a constant bitrate (CBR), the codec will output a (mostly) constant rate of data regardless of what the input was.
Without knowing more about the problem you're trying to solve, it's hard to give a specific solution. The best generally available codec these days is Opus, but it isn't the most compatible. AAC is also quite good and is widely compatible. If you have true digital silence, FLAC takes up zero bandwidth during silent parts. It's lossless though, and naturally requires a lot of bandwidth during the non-silent parts.
I am new to web development and want to learn about video performance on the web. Which parameters determine the performance of video on online streaming/watching websites?
When you're streaming video over a network connection, there are two main reasons why a video might perform poorly: network and computing power. Either the network couldn't retrieve the data in time, or the computer the browser is running on couldn't decode and render it fast enough. The former is much more common.
The major properties of a video that would affect this:
Bitrate:
Expressed in Kbps or Mbps, most people think this is a measurement of quality, but it's not. Rather, bitrate is a measurement of how much data is used to represent a second of video. A larger bitrate means a bigger file for the same runtime, and assuming limited bandwidth, this is the single most important factor in determining how your video will perform.
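For a rough sense of scale (the numbers here are illustrative): a stream encoded at 4 Mbps consumes 4,000,000 / 8 = 500 kB of data for every second of playback, so a viewer needs a bit more than 4 Mbps of sustained bandwidth to avoid stalling, and a 10-minute clip at that rate weighs in at about 500 kB/s × 600 s = 300 MB.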
Codec:
The codec refers to the specific algorithm used to encode and compress moving-picture data into bits. The main properties affected are file size and video quality (which in turn affect the bitrate), but some codecs are also more challenging to render than others, leading to poor performance on an older or burdened system even when network bandwidth isn't an issue. Again, note that a video requiring too much network is much more common than a video requiring too much computer.
For the end user who is watching the video, there are a few factors that are not part of the videos themselves that can impact performance:
The network:
Obviously, a user has to have a certain amount of bandwidth consistently available to stream video at a given quality level, so they won't be able to play much on a slow connection or while running Tor, but the server also needs to be able to deliver the bits to everyone who's asking for them. The quality level that can play without stuttering can be drastically reduced by network congestion, geographical distance between the client and the server, denial of service (i.e., things not responding), or any other factor that keeps viewers from retrieving bits consistently as the video plays. This is a tough challenge, and there's a whole industry of Content Delivery Networks (CDNs) devoted to the problem of delivering large amounts of data to large numbers of people in many different places on the globe as fast as possible.
Their computer/device:
As codecs have gotten more advanced, they've been able to do better, more complex math to turn pictures into bits. This has made file sizes smaller and quality higher, but it's also made the videos more computationally expensive to decode. Turning bits back into video takes horsepower, and older computers, less powerful devices, and systems that are just doing too much at the moment may be unable to decode video delivered at a certain bitrate.
There are a few other video properties relevant to performance, but mostly these end up affecting the bitrate. Resolution is an example of this: a video encoded at a native resolution of 1600x900 will be harder to stream than a video encoded at 320x240, but since the higher resolution takes up more space (i.e., requires more bits) to store than the lower resolution does for the same length of video, the difference ends up being reflected in the bitrate.
The same is true of file size: it doesn't really matter how big the file is in total; the important number is the bitrate -- the amount of space/bandwidth one second of video takes up.
I think those are the major factors that determine whether a certain video will perform well for a particular user requesting from a specific computer at a given network location.
Is it simply a question of adjusting the amount of prebuffered content depending on network speed? Do you adjust for this once at the beginning, every second...?
Or is it more complicated: sampling a history of your measured network speed and adjusting based on the mean/median?
Your second paragraph sums it up pretty well.
The client looks at how fast the previous chunk of audio/video (usually just a second or two's worth) downloaded, then requests a bitrate of video it thinks it can handle downloading fast enough. It always buffers (downloads) at least several seconds into the future, to give itself leeway in case the next chunk of audio/video downloads slower than expected.
Note that every combination of bitrate and resolution needs to be encoded separately. They're usually pre-encoded and stored on the server. So how many bitrates there are to choose from, and what they are, completely depends on whoever encoded and/or is hosting the content.
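A rough sketch of that rate-selection heuristic (the bitrate ladder and the 0.8 safety factor are made-up values; real HLS/DASH players use fancier heuristics and buffer models):

```c
/* Rough sketch of throughput-based rate selection, as described above.
 * The bitrate ladder and the 0.8 safety factor are made-up values. */
#include <stddef.h>

static const long ladder_bps[] = {400000, 1000000, 2500000, 5000000};
#define LADDER_LEN (sizeof ladder_bps / sizeof ladder_bps[0])

/* Pick the highest rendition whose bitrate fits within 80% of the
 * throughput measured while downloading the previous chunk. */
long pick_bitrate(long measured_bps) {
    long choice = ladder_bps[0];           /* fall back to the lowest */
    for (size_t i = 0; i < LADDER_LEN; i++) {
        if (ladder_bps[i] <= measured_bps * 0.8)
            choice = ladder_bps[i];
    }
    return choice;
}
```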
I'm looking for the fastest way to encode a webcam stream that will be viewable in an html5 video tag. I'm using a Pandaboard: http://www.digikey.com/product-highlights/us/en/texas-instruments-pandaboard/686#tabs-2 for the hardware. I can use gstreamer, cvlc, or ffmpeg. I'll be using it to drive a robot, so I need the least amount of lag in the video stream. Quality doesn't have to be great and it doesn't need audio. Also, this is only for one client, so bandwidth isn't an issue. The best solution so far is ffmpeg with mpjpeg, which gives me around 1 second of delay. Anything better?
I have been asked this many times, so I will try to answer it a bit generically and not just for mjpeg. Getting very low delays in a system requires some system-engineering effort and an understanding of the components involved.
Some simple top level tweaks I can think of are:
Ensure the codec is configured for the lowest delay. Codecs (especially embedded-system codecs) will have a low-delay configuration; enable it. This matters most if you are using H.264. Most people don't realize that, by the standard's requirements, H.264 decoders may need to buffer frames before displaying them. This can be up to 16 frames for QCIF and up to 5 frames for 720p. That is a lot of delay in getting the first frame out. If you do not use H.264, still ensure you do not have B pictures enabled, as these add delay to getting the first picture out.
Since you are using mjpeg, I don't think much of this applies to you.
Encoders also have a rate-control delay (called initial delay or VBV buffer size). Set it to the smallest value that gives you acceptable quality; that will also reduce the delay. Think of it as the bitstream buffer between encoder and decoder. If you are using x264, this is the VBV buffer size.
Some other simple configurations: use as few I pictures as possible (i.e., a large intra period).
I pictures are huge and add delay when sent over the network. This may not be very visible in systems where end-to-end delay is in the range of 1 second or more, but when you are designing systems that need an end-to-end delay of 100 ms or less, this and several other aspects come into play. Also ensure you are using a low-latency audio codec such as AAC-LC (and not HE-AAC).
In your case, to get to lower latencies I would suggest moving away from mjpeg and using at least MPEG-4 without B pictures (Simple Profile), or, best, H.264 Baseline Profile (x264 has a zerolatency option). The simple reason you will get lower latency is that you will get a lower bitrate out of the encoder, so there is less data to send, and you can go to full framerate. If you must stick with mjpeg, you are already close to what you can get without more advanced feature support from the codec and system, using the open-source components as-is.
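To make the x264 suggestions above concrete, here is a sketch of an encoder configuration along those lines, using the libx264 API (the resolution, framerate, and bitrate values are illustrative, not recommendations):

```c
/* Sketch: configuring x264 for low end-to-end delay, per the advice
 * above (zerolatency tune, no B pictures, long intra period, small VBV).
 * Resolution, fps, and bitrate values are illustrative. */
#include <x264.h>

int main(void) {
    x264_param_t p;
    x264_param_default_preset(&p, "ultrafast", "zerolatency");
    p.i_width  = 640;
    p.i_height = 480;
    p.i_fps_num = 30;
    p.i_fps_den = 1;
    p.i_bframe = 0;                /* no B pictures: no reorder delay */
    p.i_keyint_max = 300;          /* large intra period: few big I frames */
    p.rc.i_rc_method = X264_RC_ABR;
    p.rc.i_bitrate = 500;          /* kbit/s */
    p.rc.i_vbv_max_bitrate = 500;  /* cap the rate... */
    p.rc.i_vbv_buffer_size = 100;  /* ...with a small VBV buffer, i.e. a
                                      small encoder-to-decoder bitstream
                                      buffer and low rate-control delay */
    x264_param_apply_profile(&p, "baseline");

    x264_t *enc = x264_encoder_open(&p);
    if (!enc) return 1;
    /* ... feed frames with x264_encoder_encode() ... */
    x264_encoder_close(enc);
    return 0;
}
```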
Another aspect is the transmission of the content to the display unit. If you can use UDP, it will reduce latency quite a lot compared to TCP, though it can be lossy at times depending on network conditions. You have mentioned html5 video; I am curious how you are doing live streaming to an html5 video tag.
There are other aspects that can also be tweaked, which I would put in the advanced category; these require the system engineer to try various things out:
What is the network buffering in the OS? The OS also buffers data before sending it out, for performance reasons. Tweak this to get a good balance between performance and speed (see the socket-tuning sketch after this list).
Are you using CBR or VBR encoding? While CBR is great for low jitter, you can also use capped VBR if the codec provides it.
Can your decoder start decoding partial frames? So you don't have to worry about framing the data before providing it to the decoder. Just keep pushing the data to the decoder as soon as possible.
Can you do field encoding? It halves the time to get the first picture out compared to frame encoding.
Can you do sliced encoding with callbacks whenever a slice is available to send over the network immediately?
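For the OS-buffering point above, a minimal sketch (POSIX sockets; the buffer size is illustrative) would shrink the send buffer and disable Nagle's algorithm so small packets leave immediately:

```c
/* Sketch: reduce OS-side network buffering for a TCP stream.
 * The 16 kB send-buffer size is illustrative; measure before and after. */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

int tune_socket(int sock) {
    int sndbuf = 16 * 1024;  /* small send buffer: less data queued in the OS */
    int nodelay = 1;         /* don't wait to coalesce small writes */
    if (setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof sndbuf) < 0)
        return -1;
    if (setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &nodelay, sizeof nodelay) < 0)
        return -1;
    return 0;
}
```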
In the sub-100 ms latency systems I have worked on, all of the above are used. Some of these features may not be available in open-source components, but if you really need them and are enthusiastic, you could go ahead and implement them.
EDIT:
I realize you cannot do a lot of the above for an iPad streaming solution, and HLS also limits the latency you can achieve. But I hope it will prove useful in other cases when you need a low-latency system.
We had a similar problem; in our case it was necessary to time external events and sync them with the video stream. We tried several solutions, but the one described here solved the problem and has extremely low latency:
Github Link
It uses gstreamer to transcode to mjpeg, which is then sent to a small Python streaming server. This has the advantage that it uses the <img> tag instead of <video>, so it can be viewed by most modern browsers, including the iPhone.
As you want the <video> tag, a simple solution is to use http-launch. That had the lowest latency of all the solutions we tried, so it might work for you. Be warned that Ogg/Theora will not work on Safari or IE, so those wishing to target Mac or Windows will have to modify the pipeline to use MP4 or WebM.
Another solution that looks promising is gst-streaming-server. We simply couldn't find enough documentation to make it worth pursuing. I'd be grateful if somebody could ask a Stack Overflow question about how it should be used!
I want to store a large amount of data on my Arduino with an ATmega168/ATmega328 microcontroller, but unfortunately there's only 256 KB / 512 KB of EEPROM storage.
My idea is to use a compression algorithm to strip down the size. But my knowledge of compression algorithms is quite low, and my search for ready-to-use libraries failed.
So, is there a good way to optimize the storage size?
You might have a look at the LZO algorithm, which is designed to be lightweight. I don't know whether there are any implementations for the AVR system, but it might be something you could implement yourself.
You may be somewhat misinformed about the amount of EEPROM storage available on your chip, though; according to the datasheet I have, the EEPROM sizes are:
ATmega48P: 256 bytes
ATmega88P: 512 bytes
ATmega168P: 512 bytes
ATmega328P: 1024 bytes
Note that those values are in bytes, not KB as you mention in your question. This is not, by any measure, a large amount of storage.
AVRs only have a few kilobytes of EEPROM at most, and very few have more than 64K of Flash (no standard Arduinos do).
If you need to store something that you seldom modify, for instance an image, you could try using the Flash, as there is much more space there to work with. For simple images, some crude RLE encoding would go a long way.
Compressing anything more random, for instance logged data or audio, will involve a tremendous amount of overhead for the AVR; you will have better luck getting a serial EEPROM chip to hold this data. Arduino's site has a page on interfacing with a 64K chip. If you want more than that, look at interfacing with an SD card over SPI, for instance as in this audio shield.
A NASA study here (Postscript)
A repost of a 1989 article on LZW here
Keep it simple and analyze the cost/benefit of adding compression. This includes time and effort, complexity, resource usage, data compressibility, etc.
An algorithm like LZSS would probably be a good choice for an embedded platform. It's a simple algorithm and doesn't need much memory.
LZS is one I'm familiar with. It uses a 2 kB dictionary for compression and decompression (the dictionary is the most recent 2 kB of the uncompressed data stream). (LZS was patented by Hifn; however, as far as I can tell, all the patents have expired.)
But I see that the ATmega168/ATmega328 used on recent Arduinos only has 1 kB to 2 kB of SRAM, so maybe even LZS is too big for it. I'm sure you could use a variant with a smaller dictionary, but I'm not sure what compression ratios you'd achieve.
The method described in the paper “Data Compression Algorithms for Energy-Constrained Devices in Delay Tolerant Networks” might run on an ATmega328.
Reference: C. Sadler and M. Martonosi, "Data Compression Algorithms for Energy-Constrained Devices in Delay Tolerant Networks," Proceedings of the ACM Conference on Embedded Networked Sensor Systems (SenSys 2006), November 2006 (PDF).
S-LZW Source for MSPGCC: slzw.tar.gz. Updated 10 March 2007.
You might also want to take a look at LZJB; it is very short, simple, and lightweight.
Also, FastLZ might be worth a look. It gets better compression ratios than LZJB and has pretty minimal memory requirements for decompression.
If you just want to remove some repeating zeros or the like, use run-length encoding (RLE).
Repeating byte sequences will be stored as:
<mark><byte><count>
It's a super-simple algorithm, which you can probably code yourself in a few lines (see the sketch below).
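A minimal sketch of that <mark><byte><count> scheme (the MARK value 0x90 is an arbitrary choice, and a literal byte equal to MARK is escaped as a run of length 1):

```c
/* Minimal sketch of the <mark><byte><count> RLE scheme described above.
 * MARK (0x90 here) is arbitrary; runs are capped at 255 bytes. */
#include <stddef.h>
#include <stdint.h>

#define MARK 0x90

size_t rle_encode(const uint8_t *in, size_t n, uint8_t *out) {
    size_t o = 0;
    for (size_t i = 0; i < n; ) {
        size_t run = 1;
        while (i + run < n && in[i + run] == in[i] && run < 255)
            run++;
        if (run >= 4 || in[i] == MARK) {   /* worth encoding, or must escape */
            out[o++] = MARK;
            out[o++] = in[i];
            out[o++] = (uint8_t)run;
        } else {                            /* short run: copy literally */
            for (size_t k = 0; k < run; k++)
                out[o++] = in[i];
        }
        i += run;
    }
    return o;  /* number of bytes written */
}

size_t rle_decode(const uint8_t *in, size_t n, uint8_t *out) {
    size_t o = 0;
    for (size_t i = 0; i < n; ) {
        if (in[i] == MARK) {               /* expand <mark><byte><count> */
            uint8_t byte = in[i + 1], count = in[i + 2];
            for (uint8_t k = 0; k < count; k++)
                out[o++] = byte;
            i += 3;
        } else {
            out[o++] = in[i++];
        }
    }
    return o;
}
```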
Is an external EEPROM (for example, via I2C) not an option? Even if you use a compression algorithm, the downside is that the amount of data you can store in the internal EEPROM can no longer be determined in a simple way.
And of course, if you really mean kilobytes, consider an SD card connected over SPI... There are some lightweight, open-source FAT-compatible file systems on the net.
heatshrink is a data compression/decompression library for embedded/real-time systems based on LZSS. It says it can run in under 100 bytes of memory.
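For reference, heatshrink exposes a streaming sink/poll API. Here is a sketch of compressing a buffer with it, assuming the library was built with HEATSHRINK_DYNAMIC_ALLOC set to 0 (static allocation, no malloc); output-buffer handling is simplified for illustration:

```c
/* Sketch of heatshrink's sink/poll streaming API, assuming the library
 * was built with HEATSHRINK_DYNAMIC_ALLOC 0 so no malloc is needed on
 * the AVR. Output-buffer handling is simplified for illustration. */
#include <stddef.h>
#include <stdint.h>
#include "heatshrink_encoder.h"

static heatshrink_encoder hse;  /* statically allocated */

size_t compress(const uint8_t *in, size_t in_len,
                uint8_t *out, size_t out_cap) {
    size_t in_ofs = 0, out_ofs = 0;
    heatshrink_encoder_reset(&hse);

    while (in_ofs < in_len) {
        size_t count;
        /* Feed input... */
        heatshrink_encoder_sink(&hse, (uint8_t *)&in[in_ofs],
                                in_len - in_ofs, &count);
        in_ofs += count;
        /* ...then drain any output that is ready. */
        HSE_poll_res pres;
        do {
            pres = heatshrink_encoder_poll(&hse, &out[out_ofs],
                                           out_cap - out_ofs, &count);
            out_ofs += count;
        } while (pres == HSER_POLL_MORE);
    }

    /* Flush whatever is still buffered inside the encoder. */
    while (heatshrink_encoder_finish(&hse) == HSER_FINISH_MORE) {
        size_t count;
        heatshrink_encoder_poll(&hse, &out[out_ofs],
                                out_cap - out_ofs, &count);
        out_ofs += count;
    }
    return out_ofs;  /* compressed size in bytes */
}
```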