What are the default quantization tables for an RTP/MJPEG stream? - ffmpeg

I've encountered a problem while decoding an RTP/MJPEG stream from an IP camera.
As RFC 2435 states, the quantization tables (for Q values 1 <= Q <= 99) should be calculated from these default tables:
/*
* Table K.1 from JPEG spec.
*/
static const int jpeg_luma_quantizer[64] = {
16, 11, 10, 16, 24, 40, 51, 61,
12, 12, 14, 19, 26, 58, 60, 55,
14, 13, 16, 24, 40, 57, 69, 56,
14, 17, 22, 29, 51, 87, 80, 62,
18, 22, 37, 56, 68, 109, 103, 77,
24, 35, 55, 64, 81, 104, 113, 92,
49, 64, 78, 87, 103, 121, 120, 101,
72, 92, 95, 98, 112, 100, 103, 99
};
/*
* Table K.2 from JPEG spec.
*/
static const int jpeg_chroma_quantizer[64] = {
17, 18, 24, 47, 99, 99, 99, 99,
18, 21, 26, 66, 99, 99, 99, 99,
24, 26, 56, 99, 99, 99, 99, 99,
47, 66, 99, 99, 99, 99, 99, 99,
99, 99, 99, 99, 99, 99, 99, 99,
99, 99, 99, 99, 99, 99, 99, 99,
99, 99, 99, 99, 99, 99, 99, 99,
99, 99, 99, 99, 99, 99, 99, 99
};
This algorithm leads to poor picture quality (VLC shows the same stream with better quality). I've looked through the ffmpeg sources and found a similar algorithm, but with different tables:
static const uint8_t default_quantizers[128] = {
/* luma table */
16, 11, 12, 14, 12, 10, 16, 14,
13, 14, 18, 17, 16, 19, 24, 40,
26, 24, 22, 22, 24, 49, 35, 37,
29, 40, 58, 51, 61, 60, 57, 51,
56, 55, 64, 72, 92, 78, 64, 68,
87, 69, 55, 56, 80, 109, 81, 87,
95, 98, 103, 104, 103, 62, 77, 113,
121, 112, 100, 120, 92, 101, 103, 99,
/* chroma table */
17, 18, 18, 24, 21, 24, 47, 26,
26, 47, 99, 66, 56, 66, 99, 99,
99, 99, 99, 99, 99, 99, 99, 99,
99, 99, 99, 99, 99, 99, 99, 99,
99, 99, 99, 99, 99, 99, 99, 99,
99, 99, 99, 99, 99, 99, 99, 99,
99, 99, 99, 99, 99, 99, 99, 99,
99, 99, 99, 99, 99, 99, 99, 99
};
I've switched to the ffmpeg tables and the picture now looks perfect.
So why are these tables different from the ones in RFC 2435? What am I missing?

Different tables work better for different content, and better tables are discovered as time goes on. Finding the best table is really trial and error: using human judges to rate quality, then making trade-offs for the type of content you wish to optimize for. The ffmpeg tables may also produce larger files, and larger files may not have been acceptable when the JPEG spec was originally written.

The defaults are pre-calculated, but you can also include your own tables for Q = 100.
See my implementation at https://net7mma.codeplex.com/SourceControl/latest#Rtp/RFC2435Frame.cs

Related

Efficient calculation of paths with shared subpaths in a directed graph

I need a recommendation for an efficient data structure that suits my problem of calculating path costs in a directed graph, under the constraint that many paths share the same subpath, for which I do not want to do the calculations twice.
Each number in these lists refers to a node in the directed graph. Each line describes one path:
[121, 85, 135, 99, 141, 134, 4, 33, 65, 131, 18, 127],
[121, 85, 135, 99, 141, 134, 65, 33, 4, 127],
[121, 85, 135, 99, 141, 134, 65, 33, 4, 131, 18, 127],
[121, 85, 135, 99, 141, 134, 65, 33, 4, 107, 127],
[121, 85, 135, 99, 141, 134, 65, 23, 18, 127],
[121, 85, 135, 99, 141, 134, 65, 23, 18, 131, 4, 127],
[121, 85, 135, 99, 141, 134, 65, 23, 18, 131, 4, 107, 127],
[121, 85, 135, 99, 141, 134, 65, 107, 4, 127],
[121, 85, 135, 99, 141, 134, 65, 107, 4, 131, 18, 127],
[121, 85, 135, 99, 141, 134, 65, 107, 127],
[121, 85, 135, 99, 141, 134, 65, 131, 18, 127],
[121, 85, 135, 99, 141, 134, 65, 131, 4, 127],
[121, 85, 135, 99, 141, 134, 65, 131, 4, 107, 127],
[121, 85, 135, 99, 141, 4, 127],
...
As you can see in this example, many paths share the same sequence of nodes along the way. (Many other paths, not shown here, do not share a subpath.)
I want to compute the 'optimality' of each path, i.e. the sum OR product of the weights along the way. These weights can be treated as constant for my graph while traversing each path.
To be clear: I want to avoid recalculating the weight of the path from node 121 to node 141 for every path shown in this example, since these multiplications / additions of the weights between two nodes are the same (up to node 141) in each of the shown paths...
A data structure recommendation, with an explanation of why that data structure is best suited to my needs, would satisfy me.
If you also have a library recommendation, I would prefer one in C/C++ or Python.
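One structure that fits this access pattern is a prefix tree (trie) over the node sequences, with the cumulative cost memoized at each trie node, so a shared prefix such as 121, 85, 135, 99, 141 is evaluated only once. Below is a minimal Python sketch of that idea; the names path_costs and weights are illustrative, and the edge weights are assumed to be available in a dict keyed by (u, v) tuples.
def path_costs(paths, weights):
    """Sum the edge weights along each path, reusing shared prefixes."""
    root = {"cost": 0.0, "children": {}}   # trie node = cumulative cost + children
    results = []
    for path in paths:
        node, prev = root, None
        for v in path:
            child = node["children"].get(v)
            if child is None:              # first time this prefix is extended with v
                step = 0.0 if prev is None else weights[(prev, v)]
                child = {"cost": node["cost"] + step, "children": {}}
                node["children"][v] = child
            node, prev = child, v          # shared prefixes reuse the cached cost
        results.append(node["cost"])
    return results

# Tiny illustration: the (1, 2) edge weight is looked up only once.
print(path_costs([[1, 2, 3], [1, 2, 4]],
                 {(1, 2): 2.0, (2, 3): 5.0, (2, 4): 1.0}))   # => [7.0, 3.0]
For the product variant, seed the root cost with 1.0 and multiply by the step weight instead of adding it.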

Generating identical random numbers in sequence after time seed? (Running on my machine)

I'm trying to understand precisely why, when called from an external function, my time-seeded random number generator returns sequences of identical numbers.
Minimal working example of issue:
package main

import (
    "fmt"
    "math/rand"
    "time"
)

// Generates a random int as a function of range
func getRand(Range int) int {
    s1 := rand.NewSource(time.Now().UnixNano())
    r1 := rand.New(s1)
    return r1.Intn(Range)
}

// Print 100 random ints between 0 and 100
func main() {
    for i := 0; i < 100; i++ {
        fmt.Print(getRand(100), ", ")
    }
}
The output of this is
Out[1]: 40, 40, 40, 40, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 34,
34, 34, 34, 34, 34, 34, 34, 34, 34, 34, 34, 34, 34, 34, 34, 34, 34, 34, 47,
47, 47, 47, 47, 47, 47, 47, 47, 47, 47, 47, 47, 47, 47, 47, 47, 47, 47, 47,
47,47, 47, 47, 47, 47, 47, 47, 47, 47, 47, 47, 99, 99, 99, 99, 99, 99, 99,
99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99,
99, 99, 99, 99, 99, 99, 99,
I'd like to know why this is happening, for my own education. I'm also open to suggestions for a solution.
Details: I need to generate random numbers in lots of external functions in my code, but, as in this MWE, when the generator is seeded within a function other than main, it repeatedly returns the same numbers. Additionally, I need to update the range dynamically, so generating lists a priori is not an option. I would rather not generate the numbers in main() and pass them into each function; the ranges are calculated inside those functions and it would complicate things.
This is because time.Time has a granularity (which is 1 nanosecond), just like your system clock (whose granularity might even be multiple milliseconds; it depends on many things). If you call time.Now() multiple times within the greater of these granularities, chances are the returned time.Time will be the same, meaning its Time.UnixNano() method will return the same nanoseconds (the same number).
And if you use the same number as the seed, the random number generator is bound to return the same numbers.
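You can see this directly with a small illustrative program (not from the question): two generators created from the same seed always produce the same sequence.
package main

import (
    "fmt"
    "math/rand"
    "time"
)

func main() {
    seed := time.Now().UnixNano()
    a := rand.New(rand.NewSource(seed)) // two independent generators
    b := rand.New(rand.NewSource(seed)) // seeded with the same value
    fmt.Println(a.Intn(100) == b.Intn(100)) // always prints true
}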
You only need to seed the RNG once, at app startup, not before each use. You can use a package init() function for that, or do it in the variable declaration:
var r = rand.New(rand.NewSource(time.Now().UnixNano()))

// Generates a random int as a function of range
func getRand(Range int) int {
    return r.Intn(Range)
}

// Print 100 random ints between 0 and 100
func main() {
    for i := 0; i < 100; i++ {
        fmt.Print(getRand(100), ", ")
    }
}
Example output (try it on the Go Playground):
0, 28, 27, 62, 63, 89, 24, 27, 88, 84, 82, 55, 49, 35, 2, 32, 84, 58, 78, 28, 26, 58, 30, 28, 74, 6, 39, 24, 40, 47, 49, 39, 61, 62, 67, 7, 94, 87, 37, 99, 90, 80, 93, 83, 27, 69, 25, 45, 99, 12, 44, 39, 34, 86, 18, 42, 76, 40, 44, 12, 70, 3, 70, 99, 57, 43, 90, 65, 97, 64, 68, 60, 65, 56, 3, 81, 54, 56, 43, 57, 92, 93, 54, 92, 9, 86, 16, 72, 29, 12, 97, 87, 55, 42, 87, 41, 94, 53, 23, 64,
One thing to note here: rand.NewSource() returns a source which is not safe for concurrent use. If you need to call getRand() from multiple goroutines, you need to synchronize access to r, or use a separate rand.Rand in each goroutine.
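For example, one way to make getRand() safe for concurrent use is to guard the shared generator with a mutex (a sketch building on the code above, not the only option):
package main

import (
    "fmt"
    "math/rand"
    "sync"
    "time"
)

var (
    mu sync.Mutex
    r  = rand.New(rand.NewSource(time.Now().UnixNano()))
)

// getRand returns a random int in [0, Range); the mutex makes it
// safe to call from multiple goroutines.
func getRand(Range int) int {
    mu.Lock()
    defer mu.Unlock()
    return r.Intn(Range)
}

func main() {
    fmt.Println(getRand(100), getRand(100))
}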
I'm not a Go expert, but I think the problem is a generic programming issue: you set the seed on each call, and the seed is based on the time. What happens is that within a very short time you make a number of calls while the time has not changed (yet), so you get the same value because you are setting the same seed again and again.
Try to set the seed only once, outside the loop of calls.

Ruby 2.3.1 sort_by function change?

Here's a small ruby script:
p "ruby #{ RUBY_VERSION }p#{ RUBY_PATCHLEVEL }"
p 100.times.collect{|i| i}.sort_by{|j| j % 1}
I would have expected the same result from one version to another. In my case, it's not. Here are the results:
"ruby 2.2.3p173"
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99]
"ruby 2.3.1p112"
[99, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 0]
Is that normal?
Ruby doesn't guarantee a sort order when the sort keys are equal.
As to why the result changed between versions, this looks like a relevant change: Ruby 2.3 tries to use a C standard library implementation of quicksort in more cases than before.
Have a look at "Is sort in Ruby stable?". The quick answer is no, it's not. What this means is that if two sort keys are equivalent (in your case, always equal to 0), you can't make any assumptions about where the corresponding elements end up in relation to one another.
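If you do need a stable order across versions, a common idiom (not from either answer above) is to break ties with each element's original index:
p 100.times.each_with_index.sort_by { |j, i| [j % 1, i] }.map(&:first)
# equal keys are ordered by their original index, so the result is
# [0, 1, 2, ..., 99] on any Ruby version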

Associating values from one array to another in Ruby

I'm building a simple program after learning a little bit of Ruby. I'm trying to associate values from one array with another. Here's what I've got so far:
ColorValues = ["Black", "Brown", "Red", "Orange", "Yellow", "Green", "Cyan", "Blue", "Violet", "Pink", "Grey"]
(0..127).each_slice(12) {|i| p i}
each_slice returns these arrays:
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
[12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]
[24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35]
[36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]
[48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]
[60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71]
[72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83]
[84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95]
[96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107]
[108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119]
[120, 121, 122, 123, 124, 125, 126, 127]
What I'm attempting to do is take the returned arrays and associate them with each color in the ColorValues array,
i.e.
"Black" = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
"Brown" = [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]
I'm sure there's a simple way of doing this; I'm just not sure how to go about it.
Using Enumerable#zip and Hash::[]
colorValues = ["Black", "Brown", "Red", "Orange", "Yellow", "Green", "Cyan", "Blue", "Violet", "Pink", "Grey"]
Hash[colorValues.zip((0..127).each_slice(12))]
# => {"Black"=>[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11],
# "Brown"=>[12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23],
# "Red"=>[24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35],
# "Orange"=>[36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47],
# "Yellow"=>[48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59],
# "Green"=>[60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71],
# "Cyan"=>[72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83],
# "Blue"=>[84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95],
# "Violet"=>[96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107],
# "Pink"=>[108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119],
# "Grey"=>[120, 121, 122, 123, 124, 125, 126, 127]}
You can use zip for that:
ColorValues.zip( (0..127).each_slice(12) )
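If you want the hash directly, on Ruby 2.1+ you can also call Array#to_h on the zipped pairs (same result as the Hash[] version in the other answer):
ColorValues.zip( (0..127).each_slice(12) ).to_h
# => {"Black"=>[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], "Brown"=>[12, ...], ...}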
Method #zip is convenient, but your problem turns out to be so frequent in Ruby that I have overloaded the #>> operator on the Array class to perform zipping into a hash. First, install the y_support gem. Then:
require 'y_support/core_ext/array' # or require 'y_support/all'
h = ColorValues >> ( 0..127 ).each_slice( 12 ) # returns a hash like in falsetru's answer
h["Black"] #=> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]

Sort an array of arrays based on number of occurrences in Ruby

I want this array of integer arrays to be sorted based on the number of occurrences of each value.
question = [[1, 7, 8, 9, 10, 11, 12, 19, 20, 21, 31, 32, 34, 35, 36, 37, 38, 39, 40, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 81, 129, 132, 133, 134, 135, 136, 139], [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 19, 20, 21, 22, 23, 24, 25, 26, 27, 29, 31, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 81, 129, 130, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141], [30], [77]]
question.flatten.uniq.size = 90
answer = sort_it(question)
answer = [77, 68, 8, 9, 10, 11, 12, 19, 20, 21, 31, 139, 34, 35, 36, 37, 38, 39, 40, 42, 43, 44, 135, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 136, 66, 67, 7, 70, 71, 72, 73, 74, 75, 76, 1, 78, 79, 81, 129, 132, 133, 134, 45, 65, 32, 2, 3, 4, 5, 6, 13, 14, 15, 16, 17, 22, 23, 24, 25, 26, 27, 29, 33, 41, 69, 130, 137, 138, 140, 141, 30]
answer.uniq.size = 90
Here is my Ruby code:
def sort_it(actual)
  join = []
  buffer = actual.dup
  final = []
  (actual.size - 2).downto(0) { |j|
    join.unshift(actual.map { |i| i }.inject(:"&"))
    actual.pop
  }
  ordered_join = join.reverse.flatten
  final << ordered_join
  final << buffer.flatten - ordered_join
  final.flatten
end
Is this approach OK? Is there a more efficient approach?
EDIT:
As a tribute to tokland and Niklas, I have edited the answer, which was in the wrong order before.
Thanks!
Use group_by:
question.flatten.group_by{|x| x}.sort_by{|k, v| -v.size}.map(&:first)
The answer with sort_by{|k, v| -v.size} calls v.size every time elements are compared. A more efficient solution:
question.flatten.group_by(&:to_i).map{|k,v| [k, -v.size]}.sort_by(&:last).map(&:first)
Though the size of an array is cheap to get, it is an unnecessary expense (O(sorting algorithm) calls instead of O(n)), and this idiom is good to remember anyway for more expensive key computations.
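Another way to get the same effect (not from the original answers) is to precompute the counts once in a hash with a default value of 0, then sort by the cached count:
counts = Hash.new(0)                              # unseen keys start at 0
question.flatten.each { |x| counts[x] += 1 }      # one O(n) counting pass
answer = question.flatten.uniq.sort_by { |x| -counts[x] }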
