ActionScript 3: read JPEG quality of an image

I am developing an image uploader for Flash 10. Is there a way to read the JPEG quality of the browsed images?

Unfortunately, it can't be done directly:
The quality factor is not stored directly in the JPEG file, so you cannot read the quality factor from the file. (from: Microsoft support pages...)
In more detail:
The quantization table that was used to compress an image is stored in the JFIF header, but the JPEG Quality Factor that was used to generate the quantization table is not stored along with the image and hence the original JPEG Quality Factor is lost. (from: "JPEG Compression Metrics as a Quality Aware Image Transcoding", by Surendar Chandra and Carla Schlatter Ellis)
The above quote is from a paper which discusses ways to estimate the level of compression (by examining the quantization tables used in the image), but it doesn't look easy to implement: there's an example here which is part of the ImageMagick codebase, but it's written in C.
ImageMagick has been ported to Haxe, which can be compiled into Flash code, so conceivably you could get something working, but I'm afraid it's beyond my skills to explain how!
EDIT: I just found a similar question on SuperUser, which also mentions ImageMagick.
EDIT: you might also be interested in the answers to this question, which asked how to get the size of an image without loading the whole file (good for dealing with images bigger than Flash can handle).

I have used libjpeg to do this job; maybe you can refer to my code:
#include <stdio.h>
#include <math.h>
#include "jpeglib.h"
#include <setjmp.h>

// The standard IJG luminance quantization table (the quality-50 baseline).
static const unsigned int std_luminance_quant_tbl[DCTSIZE2] = {
    16,  11,  10,  16,  24,  40,  51,  61,
    12,  12,  14,  19,  26,  58,  60,  55,
    14,  13,  16,  24,  40,  57,  69,  56,
    14,  17,  22,  29,  51,  87,  80,  62,
    18,  22,  37,  56,  68, 109, 103,  77,
    24,  35,  55,  64,  81, 104, 113,  92,
    49,  64,  78,  87, 103, 121, 120, 101,
    72,  92,  95,  98, 112, 100, 103,  99
};

int ReadJpegQuality(const char *filename)
{
    // Read the whole file into memory.
    FILE *infile = fopen(filename, "rb");
    if (!infile)
        return -1;
    fseek(infile, 0, SEEK_END);
    size_t sz = ftell(infile);
    fseek(infile, 0, SEEK_SET);
    unsigned char *buffer = new unsigned char[sz];
    fread(buffer, 1, sz, infile);
    fclose(infile);

    // Parse only the header; we just need the quantization tables.
    struct jpeg_decompress_struct cinfo;
    struct jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_decompress(&cinfo);
    jpeg_mem_src(&cinfo, buffer, sz);
    jpeg_read_header(&cinfo, TRUE);

    int tmp_quality = 0;
    int linear_quality = 0;
    const int aver_times = 3;
    int times = 0;
    int aver_quality = 0;

    // Work backwards from the stored luminance quantization values to the
    // quality setting, averaging the estimate over the first few entries.
    for (int i = 0; i < DCTSIZE2; i++)
    {
        long temp = cinfo.quant_tbl_ptrs[0]->quantval[i];
        if (temp < 32767L && temp > 0)
        {
            linear_quality = ceil((float)(temp * 100L - 50L) / std_luminance_quant_tbl[i]);
            if (linear_quality == 1)
                tmp_quality = 1;
            else if (linear_quality == 100)
                tmp_quality = 100;
            else if (linear_quality > 100)
                tmp_quality = ceil((float)5000 / linear_quality);
            else
                tmp_quality = 100 - ceil((float)linear_quality / 2);
            aver_quality += tmp_quality;
            if (aver_times == ++times)
            {
                aver_quality /= aver_times;
                break;
            }
        }
    }

    jpeg_destroy_decompress(&cinfo);
    delete[] buffer;
    return aver_quality;
}

int main(int argc, char **argv)
{
    printf("quality: %d\n", ReadJpegQuality("test1.jpg"));
    return 0;
}
int main(int argc,char** argv)
{
printf("quality: %d\n",ReadJpegQuality("test1.jpg"));
return 0;
}
This method uses libjpeg to read the JPEG file's quantization table, and then uses that table to work out the quality setting.
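For context, the loop above works backwards from libjpeg's own quality scaling. A rough sketch of the forward direction (my paraphrase of jpeg_quality_scaling() and the table scaling in libjpeg's jcparam.c; treat the exact rounding as an approximation):
/* Sketch: the forward quality -> quantization mapping that ReadJpegQuality() inverts. */
static int quality_to_scale(int quality)
{
    if (quality <= 0)  quality = 1;
    if (quality > 100) quality = 100;
    /* Below 50 the scale factor grows hyperbolically, above 50 it shrinks linearly. */
    return (quality < 50) ? (5000 / quality) : (200 - quality * 2);
}

/* Each stored value is the standard table entry scaled by that factor,
 * which is what the loop above solves backwards for "linear_quality":
 *     quantval[i] = (std_luminance_quant_tbl[i] * scale + 50) / 100;
 */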

Related

How to emulate normal distribution data by actual, real life events, instead of using a math formula?

I am trying to generate some bell-shaped data (a normal distribution). There are math formulas to achieve that, but I am hoping to emulate it with natural, daily events that happen in real life.
For example, say there are 50 students, each with a 70% chance of getting any question in a 100-question multiple-choice exam correct. What score does each student get? I have the code in JavaScript:
students = Array.from({ length: 50 });
students.forEach((s, i, arr) => {
  let score = 0;
  for (let i = 0; i < 100; i++) {
    if (Math.random() >= 0.3) score++;
  }
  arr[i] = score;
});
console.log(students);
But the result doesn't seem like a normal distribution. For example, I got:
[
69, 70, 67, 64, 71, 72, 77, 70, 71, 64, 74,
74, 73, 80, 69, 68, 67, 72, 69, 70, 61, 72,
72, 75, 63, 68, 71, 69, 76, 70, 69, 69, 67,
63, 65, 80, 70, 62, 68, 63, 73, 69, 64, 79,
79, 72, 72, 70, 70, 66
]
There is no student who got a score of 12 or 20, and there is no student who got a score of 88 or 90 or 95 (the students who can get an A grade). Is there a way to emulate a real life event to generate normal distribution data?
Two issues:
50 students is a bit too small a sample to show such a pattern clearly; 10,000 students will give a better view.
You can better visualise the statistics by counting the number of students that have a given score. So you would get a count per potential score (0..100).
And now you can see the Bell curve:
let students = Array.from({ length: 10000 });
let studentsWithScore = Array(101).fill(0);
students.forEach(() => {
  let score = 0;
  for (let i = 0; i < 100; i++) {
    if (Math.random() >= 0.3) score++;
  }
  studentsWithScore[score]++;
});
console.log(studentsWithScore);
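A back-of-the-envelope check (my own numbers, not part of the original answer): each score is the sum of 100 independent yes/no trials with p = 0.7, so it follows a binomial distribution, which is what approximates the bell curve here. The mean is n·p = 100 · 0.7 = 70 and the standard deviation is sqrt(n·p·(1 − p)) = sqrt(21) ≈ 4.6, so essentially all scores (within ±3 standard deviations) fall between roughly 56 and 84. That is why no simulated student ever scores 12 or 95 with these parameters.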

Print ASCII table without loop

I just had an interview and I was asked a question: how would you print all the ASCII table characters without using a loop? The language doesn't matter.
The only method for doing so that comes to my mind is using recursion instead of loops. An algorithm for doing this would be something like:
void printASCII(int i){
    if(i == 128)
        return;
    print(i + " " + ((char)i) + "\n");
    printASCII(i + 1);
}
You should call the previous function using:
printASCII(0);
This will print the complete ASCII table, where each line contains the index followed by a space and the actual ASCII character.
I don't think you can find any other way to do so, especially since it clearly says:
The language doesn't matter
This usually means that the question is about an algorithmic idea, rather than being specific to any language.
Two other approaches that weren't mentioned:
The obvious:
#include <stdio.h>
int main() {
printf("%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c", 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127);
}
The power-of-two tree (probably what the interviewer intended): each macro expands to two copies of the previous one, so a128 ultimately expands to 128 printf calls:
#include <stdio.h>

int c;
#define a128 a64; a64;
#define a64 a32; a32;
#define a32 a16; a16;
#define a16 a8; a8;
#define a8 a4; a4;
#define a4 a2; a2;
#define a2 a; a;
#define a printf("%c", c++);

int main() {
    c = 0;
    a128
}

Quicksort partitioning

I am sorry to be dumb, but I have been struggling with my own quicksort implementation for quite a while. To be more specific, I can't get my partition procedure to work properly. Ridiculously enough, I've also tried to copy an implementation almost directly from Sedgewick's book, but with no success.
Here is my code:
void partition(int a[], int size)
{
    int i, j = size - 1;
    int t, pivot = a[j / 2];
    i = 0;
    for (;;)
    {
        while (a[i] < pivot)
            i++;
        while (a[j] > pivot)
            j--;
        if (i >= j)
            break;
        t = a[i];
        a[i++] = a[j];
        a[j] = t;
    }
}
Here is the example of input:
82, 65, 59, 10, 35, 51, 81, 47, 25, 64, 34, 38, 12, 38, 58, 74, 37, 42, 63, 18,
75, 67, 36, 77, 47, 48, 13, 91, 94, 52
The pivot here is 58, but I get wrong output:
52, 13, 48, 10, 35, 51, 47, 47, 25, 36, 34, 38, 12, 38, 18, 58, 37, 42, 63, 74,
75, 67, 64, 77, 81, 59, 65, 91, 94, 82
It looks almost correct, with the small exception of 37 and 42 coming right after 58. I've tried a lot of variations of the partitioning procedure, but they all give me similar results.
EDIT
My previous answer seemed to fix the issue but wasn't correct.
Your output is fine. Let me visually indicate the partition with vertical bars:
52, 13, 48, 10, 35, 51, 47, 47, 25, 36, 34, 38, 12, 38, 18, 58, 37, 42 || 63, 74, 75, 67, 64, 77, 81, 59, 65, 91, 94, 82
Everything to the left of the vertical bars is <= 58, and everything to the right is >= 58. This is what's expected from the partition step in a quicksort.
You do, however, need to decrement j in addition to incrementing i:
t = a[i];
a[i++] = a[j];
a[j--] = t; // added a decrement here
Other than that, the only thing you're missing is returning the partition index. Simply return the value of i at the end of the function and use that as the array boundary in your recursive step.
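A minimal sketch of how that could look (my own illustration, not the original poster's code; I add a tiny two-element base case, which sidesteps a degenerate split when the pivot happens to be the smaller of two elements):
/* Sketch: partition returns the split index; quicksort recurses on both halves. */
int partition(int a[], int size)
{
    int i = 0, j = size - 1;
    int t, pivot = a[j / 2];
    for (;;)
    {
        while (a[i] < pivot) i++;
        while (a[j] > pivot) j--;
        if (i >= j)
            break;
        t = a[i];
        a[i++] = a[j];
        a[j--] = t;                  /* decrement j as well */
    }
    return i;                        /* a[0..i-1] <= pivot and a[i..size-1] >= pivot */
}

void quicksort(int a[], int size)
{
    if (size < 2)
        return;
    if (size == 2)                   /* base case: sort a pair directly */
    {
        if (a[0] > a[1]) { int t = a[0]; a[0] = a[1]; a[1] = t; }
        return;
    }
    int p = partition(a, size);
    quicksort(a, p);                 /* left part  */
    quicksort(a + p, size - p);      /* right part */
}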
Replace a[j] = t; with a[j--] = t

How to initialize __m128i array statically in gcc?

I am porting some SSE optimization code from Windows to Linux. And I found that the following code, which works well in MSVC, won't work in GCC.
The code is to initialize an array of __m128i. Each __m128i contains 16 int8_t values. It does compile with gcc, but the result is not as expected.
Actually, as gcc defines __m128i as long long int, the code will initialize an array like:
long long int coeffs_ssse3[4] = {64, 83, 64, 36}.
I googled and was told that "The only portable way to initialize a vector is to use _mm_set_XXX intrinsics." However, I want to know whether there is any other way to initialize the __m128i array, preferably statically, and without needing to modify the following code much (since I have tons of code in this format). Any suggestion is appreciated.
static const __m128i coeffs_ssse3[4] =
{
{ 64, 0, 64, 0, 64, 0, 64, 0, 64, 0, 64, 0, 64, 0, 64, 0},
{ 83, 0, 36, 0,-36,-1,-83,-1, 83, 0, 36, 0,-36,-1,-83, -1},
{ 64, 0,-64,-1,-64,-1, 64, 0, 64, 0,-64,-1,-64,-1, 64, 0},
{ 36, 0,-83,-1, 83, 0,-36,-1, 36, 0,-83,-1, 83, 0,-36,-1}
};
It seems that gcc doesn't treat the __m128* types as being candidates for aggregate initialization. Since they aren't standard types, this behavior will vary from compiler to compiler. One approach would be to declare the array as an aligned array of 8-bit integers, then just cast a pointer to it:
static const int8_t coeffs[64] __attribute__((aligned(16))) =
{
64, 0, 64, 0, 64, 0, 64, 0, 64, 0, 64, 0, 64, 0, 64, 0,
83, 0, 36, 0,-36,-1,-83,-1, 83, 0, 36, 0,-36,-1,-83, -1,
64, 0,-64,-1,-64,-1, 64, 0, 64, 0,-64,-1,-64,-1, 64, 0,
36, 0,-83,-1, 83, 0,-36,-1, 36, 0,-83,-1, 83, 0,-36,-1
};
static const __m128i *coeffs_ssse3 = (__m128i *) coeffs;
However, I don't think this syntax (__attribute__((aligned(x)))) is supported by Visual Studio, so you would need some #ifdef trickery in there to use the right directives to achieve the alignment that you want on all of your target platforms.
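A sketch of one way to handle that portability issue (my own illustration; ALIGN16 and load_coeff are hypothetical names, not part of the answer above): hide the two alignment syntaxes behind a macro and load the rows with _mm_load_si128:
#include <emmintrin.h>   /* SSE2: __m128i, _mm_load_si128 */
#include <stdint.h>

/* Hypothetical portability macro: MSVC and gcc spell alignment differently. */
#if defined(_MSC_VER)
  #define ALIGN16 __declspec(align(16))
#else
  #define ALIGN16 __attribute__((aligned(16)))
#endif

static ALIGN16 const int8_t coeffs[64] =
{
    64, 0, 64, 0, 64, 0, 64, 0, 64, 0, 64, 0, 64, 0, 64, 0,
    83, 0, 36, 0,-36,-1,-83,-1, 83, 0, 36, 0,-36,-1,-83,-1,
    64, 0,-64,-1,-64,-1, 64, 0, 64, 0,-64,-1,-64,-1, 64, 0,
    36, 0,-83,-1, 83, 0,-36,-1, 36, 0,-83,-1, 83, 0,-36,-1
};

/* The aligned load is safe because the backing array is 16-byte aligned. */
static __m128i load_coeff(int row)
{
    return _mm_load_si128((const __m128i *)(coeffs + 16 * row));
}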

JavaCV FFmpegFrameRecorder properties explanation needed

I'm using FFmpegFrameRecorder to get the video input from my webcam and record it into a video file. The problem is that I'm building my application using a few different demo source codes that I found, and I use some properties that are not completely clear to me.
First, here is my code snippet :
FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(FILENAME, grabber.getImageWidth(),grabber.getImageHeight());
recorder.setVideoCodec(13);
recorder.setFormat("mp4");
recorder.setPixelFormat(avutil.PIX_FMT_YUV420P);
recorder.setFrameRate(30);
recorder.setVideoBitrate(10 * 1024 * 1024);
recorder.start();
setVideoCodec(13) - What is the meaning of this (13)? How can I find out which actual codec stands behind a given number?
setPixelFormat - I just don't get this one; I don't know what it does in general.
setFrameRate(30) - I think this should be pretty clear, but still, what is the logic behind choosing a frame rate (isn't higher better)?
setVideoBitrate(10*1024*1024) - Again, I have almost no idea what this does or what the logic behind the numbers is.
At the end I just want to mention one last problem that I get recording video like this. Say the actual length of the video is 20 seconds; when I play the video file created by the program, it runs significantly faster. I can't tell if it's exactly 2 times faster than it should be, but in general, if I record a 20-second video, it plays for about 10 seconds. What may cause this and how can I fix it?
VideoCodec can be chosen from this list found in avcodec.h/avcodec.java (As you can see, the number 13 gets us MPEG4, and there are others, but FFmpeg doesn't have an encoder for all of them):
AV_CODEC_ID_MPEG1VIDEO = 1,
/** preferred ID for MPEG-1/2 video decoding */
AV_CODEC_ID_MPEG2VIDEO = 2,
AV_CODEC_ID_MPEG2VIDEO_XVMC = 3,
AV_CODEC_ID_H261 = 4,
AV_CODEC_ID_H263 = 5,
AV_CODEC_ID_RV10 = 6,
AV_CODEC_ID_RV20 = 7,
AV_CODEC_ID_MJPEG = 8,
AV_CODEC_ID_MJPEGB = 9,
AV_CODEC_ID_LJPEG = 10,
AV_CODEC_ID_SP5X = 11,
AV_CODEC_ID_JPEGLS = 12,
AV_CODEC_ID_MPEG4 = 13,
AV_CODEC_ID_RAWVIDEO = 14,
AV_CODEC_ID_MSMPEG4V1 = 15,
AV_CODEC_ID_MSMPEG4V2 = 16,
AV_CODEC_ID_MSMPEG4V3 = 17,
AV_CODEC_ID_WMV1 = 18,
AV_CODEC_ID_WMV2 = 19,
AV_CODEC_ID_H263P = 20,
AV_CODEC_ID_H263I = 21,
AV_CODEC_ID_FLV1 = 22,
AV_CODEC_ID_SVQ1 = 23,
AV_CODEC_ID_SVQ3 = 24,
AV_CODEC_ID_DVVIDEO = 25,
AV_CODEC_ID_HUFFYUV = 26,
AV_CODEC_ID_CYUV = 27,
AV_CODEC_ID_H264 = 28,
AV_CODEC_ID_INDEO3 = 29,
AV_CODEC_ID_VP3 = 30,
AV_CODEC_ID_THEORA = 31,
AV_CODEC_ID_ASV1 = 32,
AV_CODEC_ID_ASV2 = 33,
AV_CODEC_ID_FFV1 = 34,
AV_CODEC_ID_4XM = 35,
AV_CODEC_ID_VCR1 = 36,
AV_CODEC_ID_CLJR = 37,
AV_CODEC_ID_MDEC = 38,
AV_CODEC_ID_ROQ = 39,
AV_CODEC_ID_INTERPLAY_VIDEO = 40,
AV_CODEC_ID_XAN_WC3 = 41,
AV_CODEC_ID_XAN_WC4 = 42,
AV_CODEC_ID_RPZA = 43,
AV_CODEC_ID_CINEPAK = 44,
AV_CODEC_ID_WS_VQA = 45,
AV_CODEC_ID_MSRLE = 46,
AV_CODEC_ID_MSVIDEO1 = 47,
AV_CODEC_ID_IDCIN = 48,
AV_CODEC_ID_8BPS = 49,
AV_CODEC_ID_SMC = 50,
AV_CODEC_ID_FLIC = 51,
AV_CODEC_ID_TRUEMOTION1 = 52,
AV_CODEC_ID_VMDVIDEO = 53,
AV_CODEC_ID_MSZH = 54,
AV_CODEC_ID_ZLIB = 55,
AV_CODEC_ID_QTRLE = 56,
AV_CODEC_ID_TSCC = 57,
AV_CODEC_ID_ULTI = 58,
AV_CODEC_ID_QDRAW = 59,
AV_CODEC_ID_VIXL = 60,
AV_CODEC_ID_QPEG = 61,
AV_CODEC_ID_PNG = 62,
AV_CODEC_ID_PPM = 63,
AV_CODEC_ID_PBM = 64,
AV_CODEC_ID_PGM = 65,
AV_CODEC_ID_PGMYUV = 66,
AV_CODEC_ID_PAM = 67,
AV_CODEC_ID_FFVHUFF = 68,
AV_CODEC_ID_RV30 = 69,
AV_CODEC_ID_RV40 = 70,
AV_CODEC_ID_VC1 = 71,
AV_CODEC_ID_WMV3 = 72,
AV_CODEC_ID_LOCO = 73,
AV_CODEC_ID_WNV1 = 74,
AV_CODEC_ID_AASC = 75,
AV_CODEC_ID_INDEO2 = 76,
AV_CODEC_ID_FRAPS = 77,
AV_CODEC_ID_TRUEMOTION2 = 78,
AV_CODEC_ID_BMP = 79,
AV_CODEC_ID_CSCD = 80,
AV_CODEC_ID_MMVIDEO = 81,
AV_CODEC_ID_ZMBV = 82,
AV_CODEC_ID_AVS = 83,
AV_CODEC_ID_SMACKVIDEO = 84,
AV_CODEC_ID_NUV = 85,
AV_CODEC_ID_KMVC = 86,
AV_CODEC_ID_FLASHSV = 87,
AV_CODEC_ID_CAVS = 88,
AV_CODEC_ID_JPEG2000 = 89,
AV_CODEC_ID_VMNC = 90,
AV_CODEC_ID_VP5 = 91,
AV_CODEC_ID_VP6 = 92,
AV_CODEC_ID_VP6F = 93,
AV_CODEC_ID_TARGA = 94,
AV_CODEC_ID_DSICINVIDEO = 95,
AV_CODEC_ID_TIERTEXSEQVIDEO = 96,
AV_CODEC_ID_TIFF = 97,
AV_CODEC_ID_GIF = 98,
AV_CODEC_ID_DXA = 99,
AV_CODEC_ID_DNXHD = 100,
AV_CODEC_ID_THP = 101,
AV_CODEC_ID_SGI = 102,
AV_CODEC_ID_C93 = 103,
AV_CODEC_ID_BETHSOFTVID = 104,
AV_CODEC_ID_PTX = 105,
AV_CODEC_ID_TXD = 106,
AV_CODEC_ID_VP6A = 107,
AV_CODEC_ID_AMV = 108,
AV_CODEC_ID_VB = 109,
AV_CODEC_ID_PCX = 110,
AV_CODEC_ID_SUNRAST = 111,
AV_CODEC_ID_INDEO4 = 112,
AV_CODEC_ID_INDEO5 = 113,
AV_CODEC_ID_MIMIC = 114,
AV_CODEC_ID_RL2 = 115,
AV_CODEC_ID_ESCAPE124 = 116,
AV_CODEC_ID_DIRAC = 117,
AV_CODEC_ID_BFI = 118,
AV_CODEC_ID_CMV = 119,
AV_CODEC_ID_MOTIONPIXELS = 120,
AV_CODEC_ID_TGV = 121,
AV_CODEC_ID_TGQ = 122,
AV_CODEC_ID_TQI = 123,
AV_CODEC_ID_AURA = 124,
AV_CODEC_ID_AURA2 = 125,
AV_CODEC_ID_V210X = 126,
AV_CODEC_ID_TMV = 127,
AV_CODEC_ID_V210 = 128,
AV_CODEC_ID_DPX = 129,
AV_CODEC_ID_MAD = 130,
AV_CODEC_ID_FRWU = 131,
AV_CODEC_ID_FLASHSV2 = 132,
AV_CODEC_ID_CDGRAPHICS = 133,
AV_CODEC_ID_R210 = 134,
AV_CODEC_ID_ANM = 135,
AV_CODEC_ID_BINKVIDEO = 136,
AV_CODEC_ID_IFF_ILBM = 137,
AV_CODEC_ID_IFF_BYTERUN1 = 138,
AV_CODEC_ID_KGV1 = 139,
AV_CODEC_ID_YOP = 140,
AV_CODEC_ID_VP8 = 141,
AV_CODEC_ID_PICTOR = 142,
AV_CODEC_ID_ANSI = 143,
AV_CODEC_ID_A64_MULTI = 144,
AV_CODEC_ID_A64_MULTI5 = 145,
AV_CODEC_ID_R10K = 146,
AV_CODEC_ID_MXPEG = 147,
AV_CODEC_ID_LAGARITH = 148,
AV_CODEC_ID_PRORES = 149,
AV_CODEC_ID_JV = 150,
AV_CODEC_ID_DFA = 151,
AV_CODEC_ID_WMV3IMAGE = 152,
AV_CODEC_ID_VC1IMAGE = 153,
AV_CODEC_ID_UTVIDEO = 154,
AV_CODEC_ID_BMV_VIDEO = 155,
AV_CODEC_ID_VBLE = 156,
AV_CODEC_ID_DXTORY = 157,
AV_CODEC_ID_V410 = 158,
AV_CODEC_ID_XWD = 159,
AV_CODEC_ID_CDXL = 160,
AV_CODEC_ID_XBM = 161,
AV_CODEC_ID_ZEROCODEC = 162,
AV_CODEC_ID_MSS1 = 163,
AV_CODEC_ID_MSA1 = 164,
AV_CODEC_ID_TSCC2 = 165,
AV_CODEC_ID_MTS2 = 166,
AV_CODEC_ID_CLLC = 167,
AV_CODEC_ID_MSS2 = 168,
AV_CODEC_ID_VP9 = 169,
AV_CODEC_ID_AIC = 170,
// etc
PixelFormat can be selected from this list in pixfmt.h/avutil.java, but each codec only supports a few of them (most of them support at least AV_PIX_FMT_YUV420P):
/** planar YUV 4:2:0, 12bpp, (1 Cr & Cb sample per 2x2 Y samples) */
AV_PIX_FMT_YUV420P = 0,
/** packed YUV 4:2:2, 16bpp, Y0 Cb Y1 Cr */
AV_PIX_FMT_YUYV422 = 1,
/** packed RGB 8:8:8, 24bpp, RGBRGB... */
AV_PIX_FMT_RGB24 = 2,
/** packed RGB 8:8:8, 24bpp, BGRBGR... */
AV_PIX_FMT_BGR24 = 3,
/** planar YUV 4:2:2, 16bpp, (1 Cr & Cb sample per 2x1 Y samples) */
AV_PIX_FMT_YUV422P = 4,
/** planar YUV 4:4:4, 24bpp, (1 Cr & Cb sample per 1x1 Y samples) */
AV_PIX_FMT_YUV444P = 5,
/** planar YUV 4:1:0, 9bpp, (1 Cr & Cb sample per 4x4 Y samples) */
AV_PIX_FMT_YUV410P = 6,
/** planar YUV 4:1:1, 12bpp, (1 Cr & Cb sample per 4x1 Y samples) */
AV_PIX_FMT_YUV411P = 7,
/** Y , 8bpp */
AV_PIX_FMT_GRAY8 = 8,
/** Y , 1bpp, 0 is white, 1 is black, in each byte pixels are ordered from the msb to the lsb */
AV_PIX_FMT_MONOWHITE = 9,
/** Y , 1bpp, 0 is black, 1 is white, in each byte pixels are ordered from the msb to the lsb */
AV_PIX_FMT_MONOBLACK = 10,
/** 8 bit with PIX_FMT_RGB32 palette */
AV_PIX_FMT_PAL8 = 11,
/** planar YUV 4:2:0, 12bpp, full scale (JPEG), deprecated in favor of PIX_FMT_YUV420P and setting color_range */
AV_PIX_FMT_YUVJ420P = 12,
/** planar YUV 4:2:2, 16bpp, full scale (JPEG), deprecated in favor of PIX_FMT_YUV422P and setting color_range */
AV_PIX_FMT_YUVJ422P = 13,
/** planar YUV 4:4:4, 24bpp, full scale (JPEG), deprecated in favor of PIX_FMT_YUV444P and setting color_range */
AV_PIX_FMT_YUVJ444P = 14,
/** XVideo Motion Acceleration via common packet passing */
AV_PIX_FMT_XVMC_MPEG2_MC = 15,
AV_PIX_FMT_XVMC_MPEG2_IDCT = 16,
/** packed YUV 4:2:2, 16bpp, Cb Y0 Cr Y1 */
AV_PIX_FMT_UYVY422 = 17,
/** packed YUV 4:1:1, 12bpp, Cb Y0 Y1 Cr Y2 Y3 */
AV_PIX_FMT_UYYVYY411 = 18,
/** packed RGB 3:3:2, 8bpp, (msb)2B 3G 3R(lsb) */
AV_PIX_FMT_BGR8 = 19,
/** packed RGB 1:2:1 bitstream, 4bpp, (msb)1B 2G 1R(lsb), a byte contains two pixels, the first pixel in the byte is the one composed by the 4 msb bits */
AV_PIX_FMT_BGR4 = 20,
/** packed RGB 1:2:1, 8bpp, (msb)1B 2G 1R(lsb) */
AV_PIX_FMT_BGR4_BYTE = 21,
/** packed RGB 3:3:2, 8bpp, (msb)2R 3G 3B(lsb) */
AV_PIX_FMT_RGB8 = 22,
/** packed RGB 1:2:1 bitstream, 4bpp, (msb)1R 2G 1B(lsb), a byte contains two pixels, the first pixel in the byte is the one composed by the 4 msb bits */
AV_PIX_FMT_RGB4 = 23,
/** packed RGB 1:2:1, 8bpp, (msb)1R 2G 1B(lsb) */
AV_PIX_FMT_RGB4_BYTE = 24,
/** planar YUV 4:2:0, 12bpp, 1 plane for Y and 1 plane for the UV components, which are interleaved (first byte U and the following byte V) */
AV_PIX_FMT_NV12 = 25,
/** as above, but U and V bytes are swapped */
AV_PIX_FMT_NV21 = 26,
/** packed ARGB 8:8:8:8, 32bpp, ARGBARGB... */
AV_PIX_FMT_ARGB = 27,
/** packed RGBA 8:8:8:8, 32bpp, RGBARGBA... */
AV_PIX_FMT_RGBA = 28,
/** packed ABGR 8:8:8:8, 32bpp, ABGRABGR... */
AV_PIX_FMT_ABGR = 29,
/** packed BGRA 8:8:8:8, 32bpp, BGRABGRA... */
AV_PIX_FMT_BGRA = 30,
/** Y , 16bpp, big-endian */
AV_PIX_FMT_GRAY16BE = 31,
/** Y , 16bpp, little-endian */
AV_PIX_FMT_GRAY16LE = 32,
/** planar YUV 4:4:0 (1 Cr & Cb sample per 1x2 Y samples) */
AV_PIX_FMT_YUV440P = 33,
/** planar YUV 4:4:0 full scale (JPEG), deprecated in favor of PIX_FMT_YUV440P and setting color_range */
AV_PIX_FMT_YUVJ440P = 34,
/** planar YUV 4:2:0, 20bpp, (1 Cr & Cb sample per 2x2 Y & A samples) */
AV_PIX_FMT_YUVA420P = 35,
/** H.264 HW decoding with VDPAU, data[0] contains a vdpau_render_state struct which contains the bitstream of the slices as well as various fields extracted from headers */
AV_PIX_FMT_VDPAU_H264 = 36,
/** MPEG-1 HW decoding with VDPAU, data[0] contains a vdpau_render_state struct which contains the bitstream of the slices as well as various fields extracted from headers */
AV_PIX_FMT_VDPAU_MPEG1 = 37,
/** MPEG-2 HW decoding with VDPAU, data[0] contains a vdpau_render_state struct which contains the bitstream of the slices as well as various fields extracted from headers */
AV_PIX_FMT_VDPAU_MPEG2 = 38,
/** WMV3 HW decoding with VDPAU, data[0] contains a vdpau_render_state struct which contains the bitstream of the slices as well as various fields extracted from headers */
AV_PIX_FMT_VDPAU_WMV3 = 39,
/** VC-1 HW decoding with VDPAU, data[0] contains a vdpau_render_state struct which contains the bitstream of the slices as well as various fields extracted from headers */
AV_PIX_FMT_VDPAU_VC1 = 40,
/** packed RGB 16:16:16, 48bpp, 16R, 16G, 16B, the 2-byte value for each R/G/B component is stored as big-endian */
AV_PIX_FMT_RGB48BE = 41,
/** packed RGB 16:16:16, 48bpp, 16R, 16G, 16B, the 2-byte value for each R/G/B component is stored as little-endian */
AV_PIX_FMT_RGB48LE = 42,
/** packed RGB 5:6:5, 16bpp, (msb) 5R 6G 5B(lsb), big-endian */
AV_PIX_FMT_RGB565BE = 43,
/** packed RGB 5:6:5, 16bpp, (msb) 5R 6G 5B(lsb), little-endian */
AV_PIX_FMT_RGB565LE = 44,
/** packed RGB 5:5:5, 16bpp, (msb)1A 5R 5G 5B(lsb), big-endian, most significant bit to 0 */
AV_PIX_FMT_RGB555BE = 45,
/** packed RGB 5:5:5, 16bpp, (msb)1A 5R 5G 5B(lsb), little-endian, most significant bit to 0 */
AV_PIX_FMT_RGB555LE = 46,
/** packed BGR 5:6:5, 16bpp, (msb) 5B 6G 5R(lsb), big-endian */
AV_PIX_FMT_BGR565BE = 47,
/** packed BGR 5:6:5, 16bpp, (msb) 5B 6G 5R(lsb), little-endian */
AV_PIX_FMT_BGR565LE = 48,
/** packed BGR 5:5:5, 16bpp, (msb)1A 5B 5G 5R(lsb), big-endian, most significant bit to 1 */
AV_PIX_FMT_BGR555BE = 49,
/** packed BGR 5:5:5, 16bpp, (msb)1A 5B 5G 5R(lsb), little-endian, most significant bit to 1 */
AV_PIX_FMT_BGR555LE = 50,
/** HW acceleration through VA API at motion compensation entry-point, Picture.data[3] contains a vaapi_render_state struct which contains macroblocks as well as various fields extracted from headers */
AV_PIX_FMT_VAAPI_MOCO = 51,
/** HW acceleration through VA API at IDCT entry-point, Picture.data[3] contains a vaapi_render_state struct which contains fields extracted from headers */
AV_PIX_FMT_VAAPI_IDCT = 52,
/** HW decoding through VA API, Picture.data[3] contains a vaapi_render_state struct which contains the bitstream of the slices as well as various fields extracted from headers */
AV_PIX_FMT_VAAPI_VLD = 53,
/** planar YUV 4:2:0, 24bpp, (1 Cr & Cb sample per 2x2 Y samples), little-endian */
AV_PIX_FMT_YUV420P16LE = 54,
/** planar YUV 4:2:0, 24bpp, (1 Cr & Cb sample per 2x2 Y samples), big-endian */
AV_PIX_FMT_YUV420P16BE = 55,
/** planar YUV 4:2:2, 32bpp, (1 Cr & Cb sample per 2x1 Y samples), little-endian */
AV_PIX_FMT_YUV422P16LE = 56,
/** planar YUV 4:2:2, 32bpp, (1 Cr & Cb sample per 2x1 Y samples), big-endian */
AV_PIX_FMT_YUV422P16BE = 57,
/** planar YUV 4:4:4, 48bpp, (1 Cr & Cb sample per 1x1 Y samples), little-endian */
AV_PIX_FMT_YUV444P16LE = 58,
/** planar YUV 4:4:4, 48bpp, (1 Cr & Cb sample per 1x1 Y samples), big-endian */
AV_PIX_FMT_YUV444P16BE = 59,
/** MPEG4 HW decoding with VDPAU, data[0] contains a vdpau_render_state struct which contains the bitstream of the slices as well as various fields extracted from headers */
AV_PIX_FMT_VDPAU_MPEG4 = 60,
/** HW decoding through DXVA2, Picture.data[3] contains a LPDIRECT3DSURFACE9 pointer */
AV_PIX_FMT_DXVA2_VLD = 61,
/** packed RGB 4:4:4, 16bpp, (msb)4A 4R 4G 4B(lsb), little-endian, most significant bits to 0 */
AV_PIX_FMT_RGB444LE = 62,
/** packed RGB 4:4:4, 16bpp, (msb)4A 4R 4G 4B(lsb), big-endian, most significant bits to 0 */
AV_PIX_FMT_RGB444BE = 63,
/** packed BGR 4:4:4, 16bpp, (msb)4A 4B 4G 4R(lsb), little-endian, most significant bits to 1 */
AV_PIX_FMT_BGR444LE = 64,
/** packed BGR 4:4:4, 16bpp, (msb)4A 4B 4G 4R(lsb), big-endian, most significant bits to 1 */
AV_PIX_FMT_BGR444BE = 65,
/** 8bit gray, 8bit alpha */
AV_PIX_FMT_YA8 = 66,
// etc
FrameRate indicates the number of frames per second the video should be played back at (it has nothing to do with the number or the timing of the images you actually record, although it provides a basis for the encoding bitrate). So, in the case of 30 FPS, to cover 20 seconds of video, you need to call record() 30 * 20 = 600 times. If you do not call record() 600 times, then this is the cause of your problem.
VideoBitrate provides the video bitrate (in bits per second) at which the video stream should be encoded. Wikipedia has a nice article about that.
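To put rough numbers on that (my own arithmetic, using the values from the snippet above): 10 * 1024 * 1024 bits per second is about 10.5 Mbit/s, so a 20-second clip works out to roughly 10,485,760 × 20 / 8 ≈ 26 MB of video data; the bitrate is essentially a quality-versus-file-size trade-off. The playback-speed issue is usually the flip side of the frame-rate point above: if the webcam actually delivers only about 15 frames per second but the file is declared as 30 FPS, a 20-second recording contains about 300 frames and plays back in about 10 seconds.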
