Does anyone know an algorithm that takes a temperature in Kelvin/Celsius and returns an RGB color, like in thermal cameras?
I found some links:
http://www.brucelindbloom.com/index.html?Eqn_XYZ_to_T.html
http://www.fourmilab.ch/documents/specrend/specrend.c
But I can't figure out what an XYZ color is. I only have a temperature in Celsius, though I can convert it to any other scale (Temperature Conversion Formulas).
UPDATE:
I have found this Blackbody Color Datafile, but those Kelvin values seem impossible to me; red is supposed to be hot, so why is 8000 K blue and 1000 K red?
The best option is to use an image with GetPixel:
private void UpdateTemp()
{
    // The resource is a gradient strip: the x coordinate encodes the temperature.
    Bitmap temps = (Bitmap)Properties.Resources.temp;
    if (curTemp >= 0)
    {
        // Clamp the index to the bitmap's width before sampling a pixel.
        int i = curTemp;
        if (i < 0)
            i = 0;
        if (i > temps.Width - 1)
            i = temps.Width - 1;
        this.BackColor = temps.GetPixel(i, 10);
    }
}
Or build an array (Source); a sketch of an index helper follows the table:
private static Color[] colors =
{
Color.FromArgb(155, 188, 255), // 40000
Color.FromArgb(155, 188, 255), // 39500
Color.FromArgb(155, 188, 255), // 39000
Color.FromArgb(155, 188, 255), // 38500
Color.FromArgb(156, 188, 255), // 38000
Color.FromArgb(156, 188, 255), // 37500
Color.FromArgb(156, 189, 255), // 37000
Color.FromArgb(156, 189, 255), // 36500
Color.FromArgb(156, 189, 255), // 36000
Color.FromArgb(157, 189, 255), // 35500
Color.FromArgb(157, 189, 255), // 35000
Color.FromArgb(157, 189, 255), // 34500
Color.FromArgb(157, 189, 255), // 34000
Color.FromArgb(157, 189, 255), // 33500
Color.FromArgb(158, 190, 255), // 33000
Color.FromArgb(158, 190, 255), // 32500
Color.FromArgb(158, 190, 255), // 32000
Color.FromArgb(158, 190, 255), // 31500
Color.FromArgb(159, 190, 255), // 31000
Color.FromArgb(159, 190, 255), // 30500
Color.FromArgb(159, 191, 255), // 30000
Color.FromArgb(159, 191, 255), // 29500
Color.FromArgb(160, 191, 255), // 29000
Color.FromArgb(160, 191, 255), // 28500
Color.FromArgb(160, 191, 255), // 28000
Color.FromArgb(161, 192, 255), // 27500
Color.FromArgb(161, 192, 255), // 27000
Color.FromArgb(161, 192, 255), // 26500
Color.FromArgb(162, 192, 255), // 26000
Color.FromArgb(162, 193, 255), // 25500
Color.FromArgb(163, 193, 255), // 25000
Color.FromArgb(163, 193, 255), // 24500
Color.FromArgb(163, 194, 255), // 24000
Color.FromArgb(164, 194, 255), // 23500
Color.FromArgb(164, 194, 255), // 23000
Color.FromArgb(165, 195, 255), // 22500
Color.FromArgb(166, 195, 255), // 22000
Color.FromArgb(166, 195, 255), // 21500
Color.FromArgb(167, 196, 255), // 21000
Color.FromArgb(168, 196, 255), // 20500
Color.FromArgb(168, 197, 255), // 20000
Color.FromArgb(169, 197, 255), // 19500
Color.FromArgb(170, 198, 255), // 19000
Color.FromArgb(171, 198, 255), // 18500
Color.FromArgb(172, 199, 255), // 18000
Color.FromArgb(173, 200, 255), // 17500
Color.FromArgb(174, 200, 255), // 17000
Color.FromArgb(175, 201, 255), // 16500
Color.FromArgb(176, 202, 255), // 16000
Color.FromArgb(177, 203, 255), // 15500
Color.FromArgb(179, 204, 255), // 15000
Color.FromArgb(180, 205, 255), // 14500
Color.FromArgb(182, 206, 255), // 14000
Color.FromArgb(184, 207, 255), // 13500
Color.FromArgb(186, 208, 255), // 13000
Color.FromArgb(188, 210, 255), // 12500
Color.FromArgb(191, 211, 255), // 12000
Color.FromArgb(193, 213, 255), // 11500
Color.FromArgb(196, 215, 255), // 11000
Color.FromArgb(200, 217, 255), // 10500
Color.FromArgb(204, 219, 255), // 10000
Color.FromArgb(208, 222, 255), // 9500
Color.FromArgb(214, 225, 255), // 9000
Color.FromArgb(220, 229, 255), // 8500
Color.FromArgb(227, 233, 255), // 8000
Color.FromArgb(235, 238, 255), // 7500
Color.FromArgb(245, 243, 255), // 7000
Color.FromArgb(255, 249, 253), // 6500
Color.FromArgb(255, 243, 239), // 6000
Color.FromArgb(255, 236, 224), // 5500
Color.FromArgb(255, 228, 206), // 5000
Color.FromArgb(255, 219, 186), // 4500
Color.FromArgb(255, 209, 163), // 4000
Color.FromArgb(255, 196, 137), // 3500
Color.FromArgb(255, 180, 107), // 3000
Color.FromArgb(255, 161, 72), // 2500
Color.FromArgb(255, 137, 18), // 2000
Color.FromArgb(255, 109, 0), // 1500
Color.FromArgb(255, 51, 0), // 1000
};
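If you go the array route, you still need to map a Kelvin value onto an index. A minimal sketch (the 500 K step and the 1000-40000 K range are read off the comments above; the helper itself is mine, not from the source):

private static Color ColorFromTemperature(int kelvin)
{
    // Entries run from 40000 K down to 1000 K in steps of 500 K.
    int index = (40000 - kelvin + 250) / 500; // round to the nearest entry
    if (index < 0) index = 0;
    if (index > colors.Length - 1) index = colors.Length - 1;
    return colors[index];
}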
I realize this is a two-year-old thread, but I had the same predicament.
I took the data from the color table and applied piecewise 5th-order polynomial fitting using numpy.polyfit in Python. From those coefficients I came up with the C# function below. R-squared values for the fits are close to or exceed 0.999. It has less than 0.01% error through most of its domain, though at a couple of points the error is closer to 3%. It should be good enough for most situations.
private Color blackBodyColor(double temp)
{
    float x = (float)(temp / 1000.0);
    float x2 = x * x;
    float x3 = x2 * x;
    float x4 = x3 * x;
    float x5 = x4 * x;
    float R, G, B = 0f;

    // red
    if (temp <= 6600)
        R = 1f;
    else
        R = 0.0002889f * x5 - 0.01258f * x4 + 0.2148f * x3 - 1.776f * x2 + 6.907f * x - 8.723f;

    // green
    if (temp <= 6600)
        G = -4.593e-05f * x5 + 0.001424f * x4 - 0.01489f * x3 + 0.0498f * x2 + 0.1669f * x - 0.1653f;
    else
        G = -1.308e-07f * x5 + 1.745e-05f * x4 - 0.0009116f * x3 + 0.02348f * x2 - 0.3048f * x + 2.159f;

    // blue
    if (temp <= 2000f)
        B = 0f;
    else if (temp < 6600f)
        B = 1.764e-05f * x5 + 0.0003575f * x4 - 0.01554f * x3 + 0.1549f * x2 - 0.3682f * x + 0.2386f;
    else
        B = 1f;

    return Color.FromScRgb(1f, R, G, B);
}
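Note that Color.FromScRgb takes float components and is the WPF color type from System.Windows.Media, so this function targets WPF rather than System.Drawing. A quick usage sketch (the brush line is just an example, not part of the original answer):

Color warm = blackBodyColor(2700.0); // roughly incandescent, per the other answer
// e.g. someShape.Fill = new SolidColorBrush(warm);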
If I get you right, you are looking for theoretical background on the XYZ color space.
Color temperature is based on the actual color of light emitted from something (theoretically, an "ideal black body") that emits light based solely on its temperature.
Some examples of this kind of light source: if you have an electric stove element that is glowing red, it might be around 1000K. A regular incandescent bulb filament is around 2700K, and the sun is roughly 5700K. All three are fair approximations of a "black body"; they emit a particular spectrum of light based on their actual temperature.
Many artificial light sources are not actually at the "temperature" of the light they emit (and their spectra are generally not black-body spectra, either). Instead, their "temperature" rating is the temperature a theoretical black body would have to be in order to emit light of that color. There are also colors that cannot be generated by a black body: light that is greenish or purplish compared to more "natural"-looking black-body illumination.
As mentioned in one of the comments, the kind of thermal camera displays you are probably thinking of are all false-color. In a false-color display, the colors are chosen for convenience only: for a thermal camera they might choose a "hot"-looking red for warm and a "cold"-looking blue for cold. But they could just as easily choose a range from black to white, or fuchsia to green.
Because false-color displays are arbitrary, you really need to check the color key to a particular image or system if you want to estimate the temperature (scientific images should generally have some kind of color key for this purpose). If you have no color key, and no documentation on how the image was generated, you are out of luck.
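As a toy illustration of how arbitrary false color is (this mapping is invented for the example, not taken from any real camera), here is a linear blue-to-red ramp over a chosen range:

// Map tempC linearly onto blue (cold) -> red (hot); the 0-100 range is arbitrary,
// which is exactly the point of false color.
private static Color FalseColor(double tempC, double min = 0.0, double max = 100.0)
{
    double t = Math.Max(0.0, Math.Min(1.0, (tempC - min) / (max - min)));
    return Color.FromArgb((int)(255 * t), 0, (int)(255 * (1 - t)));
}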
The blackBodyColor function above overestimates red when temp > 10000 K, and the colors turn purple when temp > 14000 K. I refitted the data with 7th-order polynomials. The coefficients are:
def temp_to_rgb(temp):
    t = temp / 1000.

    # calculate red
    if t < 6.527:
        red = 1.0
    else:
        coeffs = [4.93596077e+00, -1.29917429e+00,
                  1.64810386e-01, -1.16449912e-02,
                  4.86540872e-04, -1.19453511e-05,
                  1.59255189e-07, -8.89357601e-10]
        tt = min(t, 40)
        red = poly(coeffs, tt)
    red = max(red, 0)
    red = min(red, 1)

    # calculate green
    if t < 0.85:
        green = 0.0
    elif t < 6.6:
        coeffs = [-4.95931720e-01, 1.08442658e+00,
                  -9.17444217e-01, 4.94501179e-01,
                  -1.48487675e-01, 2.49910386e-02,
                  -2.21528530e-03, 8.06118266e-05]
        green = poly(coeffs, t)
    else:
        coeffs = [3.06119745e+00, -6.76337896e-01,
                  8.28276286e-02, -5.72828699e-03,
                  2.35931130e-04, -5.73391101e-06,
                  7.58711054e-08, -4.21266737e-10]
        tt = min(t, 40)
        green = poly(coeffs, tt)
    green = max(green, 0)
    green = min(green, 1)

    # calculate blue
    if t < 1.9:
        blue = 0.0
    elif t < 6.6:
        coeffs = [4.93997706e-01, -8.59349314e-01,
                  5.45514949e-01, -1.81694167e-01,
                  4.16704799e-02, -6.01602324e-03,
                  4.80731598e-04, -1.61366693e-05]
        blue = poly(coeffs, t)
    else:
        blue = 1.0
    blue = max(blue, 0)
    blue = min(blue, 1)

    return (red, green, blue)
Here poly(coeffs,x) = coeffs[0] + coeffs[1]*x + coeffs[2]*x**2 + ...
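For completeness, one way to spell that helper (Horner's rule; the answer only describes it, so this exact implementation is mine):

def poly(coeffs, x):
    # coeffs[0] + coeffs[1]*x + coeffs[2]*x**2 + ... evaluated via Horner's rule
    result = 0.0
    for c in reversed(coeffs):
        result = result * x + c
    return result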
Sorry, I am not familiar with C#, but you can easily port this code.
The error is within 0.5% for most cases, and at most 1.2% for red at temp = 6600 K. High-order polynomials are adopted here, so red and green must be held constant for temp > 40000 K; otherwise strange things will happen.
Related
I'm making a game where you can collect Pokémon. When you run into a Pokéball, a new page shows the Pokémon you found, and you can click anywhere to continue. The problem is that I always have to click about 10 times before it actually works.
I thought it might be because the page is handled inside the draw function, but when I took it out of draw it didn't work at all. I also don't have this problem with the start page, and that is in the draw function too.
I also have a problem where the sprite sometimes falls through a block after I finish collecting a Pokémon, but that one should be a little easier to figure out.
https://openprocessing.org/sketch/1786189
function preload() {
sprite = loadImage('sprite1.png')
lava = loadImage('lava3.png')
pokeball1 = loadImage('pokeball.png')
pokeball2 = loadImage('pokeball.png')
pokeball3 = loadImage('pokeball.png')
pokeball4 = loadImage('pokeball.png')
pokeball5 = loadImage('pokeball.png')
pichu = loadImage('0172Pichu.png')
shinx = loadImage('shinximage.png')
wooper = loadImage('194.png')
hoppip = loadImage('Hoppip-Pokemon-PNG-Image.png')
rowlet = loadImage('5859604c4f6ae202fedf2854.png')
srowlet = loadImage('ddhnzm5-18daf778-cd90-49bf-bcc3-3cd51b9c7d02.png')
gymBadges = loadImage('gymbadges.webp')
}
function setup() {
createCanvas(700, 700);
background(100);
/*start = createButton('START')
start.size(400,80)
start.position(350,620)*/
background(194, 220, 255)
//frameRate(30)
/*page1 = createButton('X');
page1.position(10,30);
page1.size(60,60)
page1.style('background-color',255 )
page1.style('border-radius', '10px')
page1.style('font-size', '30px')
page1.mousePressed(back)*/
}
let page = 0
let x = 10
//let y = 300
let y =580
function draw() {
let bcolor = [255, 203, 164, 255];
//let bcolor = [0,0,0,255]
let rightColor = get(x + 47, y + 22.5)
let leftColor = get(x - 3, y + 22.5)
//let downColor = get(x + 23, 350)
let downColor = get(x + 23, y + 47)
let upColor = get(x + 23, y + 50)
let shinyOrNo = int(random(4))
//strokeWeight(10)
//point(x-3,y+22.5)
if (page == 0) {
fill(128, 157, 255)
rect(150, 300, 400, 100, 5)
fill(255)
textSize(60)
text('START', 240, 370)
textSize(28)
text('use arrow keys to play', 200, 450)
}
if (page == 1) {
background(224, 209, 255)
fill(266)
ellipse(x, y + 22.5, 5, 5)
stroke(0)
strokeWeight(5)
//sky
fill(210, 235, 249)
rect(0, 0, 700, 700, 10)
//grass
noStroke()
fill(148, 244, 176)
rect(3, 630, 694, 67)
//obstacles
stroke(0)
strokeWeight(2)
fill(255, 203, 164)
//fill(0)
//noStroke()
rect(70, 610, 45, 20, 2)
rect(140, 590, 45, 40, 2)
rect(210, 570, 45, 60, 2)
rect(278, 540, 290, 25, 2)
rect(600, 530, 40, 25, 2)
//rect(610, 500, 40, 20, 2)
rect(500, 470, 60, 25, 2)
rect(430, 410, 50, 30, 2)
rect(70, 370, 113, 25, 2)
rect(200, 370, 60, 25, 2)
rect(280, 370, 100, 25, 2)
rect(15, 330, 30, 25, 2)
rect(100, 290, 30, 25, 2)
rect(15, 250, 30, 25, 2)
rect(100, 210, 30, 25, 2)
rect(175, 165, 30, 25, 2)
rect(260, 130, 270, 25, 2)
rect(560, 100, 100, 25, 2)
fill(134, 89, 248)
triangle(210, 360, 200, 370, 220, 370)
triangle(230, 360, 220, 370, 240, 370)
triangle(250, 360, 240, 370, 260, 370)
image(lava, 310, 488, 70, 70)
image(pokeball1, 222, 540, 23, 23)
image(pokeball2, 520, 441, 23, 23)
image(pokeball3, 150, 340, 23, 23)
image(pokeball4, 18, 220, 23, 23)
image(pokeball5, 420, 100, 23, 23)
rect(620, 50, 30, 16)
line(650, 66, 650, 99)
//sprite
//print (y)
if (eqColor(downColor, bcolor)) {
} else {
y = y +2
}
//y=y+2
image(sprite, x, y, 45, 45)
if (keyIsPressed) {
if (keyIsDown(RIGHT_ARROW)) {
if (eqColor(rightColor, bcolor)) {
} else {
x = x + 2
}
} else if (keyIsDown(LEFT_ARROW)) {
if (eqColor(leftColor, bcolor)) {
} else {
x = x - 2
}
}
if (keyIsDown(UP_ARROW)) {
if(eqColor(upColor,bcolor)) {
}else{
y=y-6
}
}
} //end of keyispressed
//jump
//if (y = y - 6) {
//y = y + 9
//}
//ground
if (y > 590) {
y = 591
}
//walk off screen
if (x < 0 || x > 700) {
x = 0
} else if (x > 655) {
x = 655
} else if (y < 0) {
y = 0
}
//collisons on the top
/*if (x>60 && x<90) {
y = 565
}*/
//pokeball 1
if (x > 189 && x < 218 && y > 512 && y < 535) {
page = 2
}
if (page == 2) {
fill(192, 176, 232)
rect(0, 0, 700, 700)
image(pichu, 230, 200, 300, 300)
fill(0)
strokeWeight(1)
text('click anywhere to continue', 200, 600)
text('You found Pichu!!', 250, 150)
pokeball1 =loadImage('640px-HD_transparent_picture.png')
}
//print(x,y)
//pokeball 2
if (x > 505 && x < 533 && y > 420 && y < 444) {
page = 3
}
if (page == 3) {
fill(192, 176, 232)
rect(0, 0, 700, 700)
image(shinx, 150, 200, 450, 300)
fill(0)
strokeWeight(1)
text('You found Shinx!!', 250, 150)
textSize(19)
text('fun fact: shinx is micaiah\'s favorite pokemon', 180,530)
textSize(28)
text('click anywhere to continue', 200, 600)
pokeball2 =loadImage('640px-HD_transparent_picture.png')
}
//pokeball 3
if (x > 114 && x < 145 && y > 302 && y < 350) {
page = 4
}
if (page == 4) {
fill(192, 176, 232)
rect(0, 0, 700, 700)
image(wooper, 230, 200, 300, 300)
fill(0)
strokeWeight(1)
text('click anywhere to continue', 200, 600)
text('You found Wooper!!', 250, 150)
pokeball3 =loadImage('640px-HD_transparent_picture.png')
}
//pokeball 4
if (x > 0 && x < 25 && y > 200 && y < 225 && shinyOrNo == 1) {
page = 5
}
if (x > 0 && x < 25 && y > 200 && y < 225 && shinyOrNo == 2) {
page = 5
}
if (page == 5) {
fill(192, 176, 232)
rect(0, 0, 700, 700)
image(rowlet,230,200,300,300)
fill(0)
strokeWeight(1)
text('click anywhere to continue', 200, 600)
text('You found rowlet!!', 250,150)
pokeball4 =loadImage('640px-HD_transparent_picture.png')
}
if (x > 0 && x < 25 && y > 200 && y < 225 && shinyOrNo == 3) {
page = 7
}
if (page == 7) {
fill(192, 176, 232)
rect(0, 0, 700, 700)
image(srowlet,230,200,300,300)
fill(0)
strokeWeight(1)
text('click anywhere to continue', 200, 600)
text('You found a shiny rowlet!!', 250,150)
pokeball4 =loadImage('640px-HD_transparent_picture.png')
}
//print(page)
//pokeball 5
if (x > 382 && x < 415 && y > 84 && y < 115) {
page = 6
}
if (page == 6) {
fill(192, 176, 232)
rect(0, 0, 700, 700)
image(hoppip, 230, 200, 300, 300)
fill(0)
strokeWeight(1)
text('click anywhere to continue', 200, 600)
text('You found Hoppip!!', 250, 150)
pokeball5 =loadImage('640px-HD_transparent_picture.png')
}
//lava
/*if (x > 285 && x < 355 && y > 454 ) {
x = 10
y = 580
}*/
point(322,502)
//flag
if (x > 595 && x < 620 && y < 63) {
page = 8
}
if (page == 8) {
fill(192, 176, 232)
rect(0, 0, 700, 700)
fill(0)
strokeWeight(1)
text('your prize is all of the pokemon gym badges',100,200)
image(gymBadges, 160,200,430,300)
text('You finished the game!!', 220, 150)
}
//spikes
if (x > 200 && x < 250 && y > 323 && y<375) {
x = 10
y = 580
}
/*if(y>330 && y<335){
y=331
}*/
} //end of page 1
//print(x,y)
} //end of draw
function mousePressed() {
if (page ==0 && mouseX >= 150 && mouseX <= 550 && mouseY >= 300 && mouseY <= 400) {
page = 1
}
if (page>1) {
page = 1
print(true)
}
}
function eqColor(a, b) {
return a[0] == b[0] && a[1] == b[1] &&
a[2] == b[2] && a[3] == b[3];
}
Is it possible to fill a shape with multiple colors/patterns where it intersects an underlying shape or is intersected by a line?
For example, we can see that the square has two different fills in the top-left half and bottom-right half.
One option would be to use mask().
This works on both PImage and PGraphics instances.
If you have an image of the top-left diagonals (let's call it diagonals) and another of the green grid (let's call it grid), then you simply need to apply a mask (let's call it gridMask) to one of the images, for example to reveal only the lower-right triangle:
grid.mask(gridMask);
What's white will be revealed; what's black will be masked.
You could use the same principle with PGraphics:
size(300, 300);
PGraphics diagonalsLayer = createGraphics(width, height);
diagonalsLayer.beginDraw();
diagonalsLayer.strokeWeight(3);
diagonalsLayer.fill(#114866);
diagonalsLayer.triangle(0, 0, width, 0, 0, height * 0.5);
diagonalsLayer.fill(#FFFFFF);
diagonalsLayer.triangle(width, 0, 0, height * 0.5, 0, height);
diagonalsLayer.fill(0);
diagonalsLayer.rectMode(CENTER);
diagonalsLayer.rect(width * 0.5, height * 0.5, 75, 75, 30, 30, 30, 30);
diagonalsLayer.endDraw();
PGraphics gridLayer = createGraphics(width, height);
gridLayer.beginDraw();
gridLayer.background(#d1e5f0);
gridLayer.stroke(#127e6c);
gridLayer.strokeWeight(3);
int gridSize = width / 5;
for (int i = 0; i < 5; i++) {
  gridLayer.line(gridSize * i, 0, gridSize * i, height);
  gridLayer.line(0, gridSize * i, width, gridSize * i);
}
gridLayer.rectMode(CENTER);
gridLayer.noFill();
gridLayer.stroke(#000000);
gridLayer.rect(width * 0.5, height * 0.5, 75, 75, 30, 30, 30, 30);
gridLayer.endDraw();
PGraphics gridMask = createGraphics(width, height);
gridMask.beginDraw();
gridMask.background(0);
gridMask.noStroke();
gridMask.triangle(width, 0, width, height, 0, height);
gridMask.endDraw();
gridLayer.mask(gridMask);
image(diagonalsLayer, 0, 0);
image(gridLayer, 0, 0);
I am distorting an image using ThinPlateSplineShapeTransformer from OpenCV 3.4.2 in C++. Separately, using the same transformer object, I want to transform 3 points from the initial image into the destination image. To visualize the transformation I use 3 triangles:
green: the triangle drawn on the original image. This one will be
distorted with the image.
blue: the reference triangle (initial coordinates)
red: the triangle after distorting the points
Original image:
Original image augmented with a triangle for reference:
Distorted image plus the separate point transformation; the red triangle should overlap the green one. Blue is the initial one.
Code:
void transform()
{
    Mat img = imread("test.jpg"); // the posted original image
    auto tps = cv::createThinPlateSplineShapeTransformer();

    std::vector<cv::Point2f> sourcePoints, targetPoints, myPoints;
    sourcePoints.push_back(cv::Point2f(0, 0));
    targetPoints.push_back(cv::Point2f(100, 0));
    sourcePoints.push_back(cv::Point2f(650, 40));
    targetPoints.push_back(cv::Point2f(500, 0));
    sourcePoints.push_back(cv::Point2f(0, 599));
    targetPoints.push_back(cv::Point2f(0, 450));
    sourcePoints.push_back(cv::Point2f(799, 599));
    targetPoints.push_back(cv::Point2f(600, 599));

    std::vector<cv::DMatch> matches;
    for (unsigned int i = 0; i < sourcePoints.size(); i++)
        matches.push_back(cv::DMatch(i, i, 0));
    tps->estimateTransformation(sourcePoints, targetPoints, matches);

    std::vector<cv::Point2f> transPoints, transPoints2;

    //======== draw test points
    myPoints.push_back(Point2f(100, 100));
    myPoints.push_back(Point2f(200, 200));
    myPoints.push_back(Point2f(100, 400));
    line(img, myPoints[0], myPoints[1], Scalar(0, 255, 0), 3);
    line(img, myPoints[1], myPoints[2], Scalar(0, 255, 0), 3);
    line(img, myPoints[2], myPoints[0], Scalar(0, 255, 0), 3);

    //========= warp image
    Mat img2 = img.clone();
    tps->warpImage(img, img2);

    //========= warp points
    tps->applyTransformation(myPoints, transPoints);
    //tps->applyTransformation(transPoints2, transPoints);
    line(img2, transPoints[0], transPoints[1], Scalar(0, 0, 255), 3);
    line(img2, transPoints[1], transPoints[2], Scalar(0, 0, 255), 3);
    line(img2, transPoints[2], transPoints[0], Scalar(0, 0, 255), 3);

    //========== draw reference points
    line(img2, myPoints[0], myPoints[1], Scalar(255, 0, 0), 3);
    line(img2, myPoints[1], myPoints[2], Scalar(255, 0, 0), 3);
    line(img2, myPoints[2], myPoints[0], Scalar(255, 0, 0), 3);

    imshow("img", img);
    imshow("img2", img2);
    get_test_contur();
    waitKey(0);
}
I don't understand why the result (red triangle) doesn't overlap the green triangle. What am I missing?
I would like to plot a series of ellipses along a bezier path but I am struggling to plot anything more than just the line of the path. I don't need it to move at all. So far I have:
void setup() {
  size(150, 150);
  background(255);
  smooth();

  // Don't show where control points are
  noFill();
  stroke(0);
  beginShape();
  vertex(50, 75); // first point
  bezierVertex(25, 25, 125, 25, 100, 75);
  endShape();
}
How do I plot ellipses to follow the bezier path instead of the line?
Why would you expect that code to draw circles? It doesn't contain any calls to the ellipse() function.
Anyway, it sounds like you're looking for the bezierPoint() function:
noFill();
bezier(85, 20, 10, 10, 90, 90, 15, 80);
fill(255);
int steps = 10;
for (int i = 0; i <= steps; i++) {
  float t = i / float(steps);
  float x = bezierPoint(85, 10, 90, 15, t);
  float y = bezierPoint(20, 10, 90, 80, t);
  ellipse(x, y, 5, 5);
}
As always, more info can be found in the reference.
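Mapped onto the control points from your own beginShape()/bezierVertex() snippet, the same idea would look something like this (a sketch; bezierPoint takes anchor, control, control, anchor per axis):

noFill();
beginShape();
vertex(50, 75);                         // first anchor
bezierVertex(25, 25, 125, 25, 100, 75); // two controls, then the second anchor
endShape();

fill(255);
int steps = 10;
for (int i = 0; i <= steps; i++) {
  float t = i / float(steps);
  float x = bezierPoint(50, 25, 125, 100, t);
  float y = bezierPoint(75, 25, 25, 75, t);
  ellipse(x, y, 5, 5);
}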
I'm using FFmpegFrameRecorder to get the video input from my webcam and record it into a video file. The problem is that I'm building my application from a few different demo source codes I found, and I use some properties that are not completely clear to me.
First, here is my code snippet:
FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(FILENAME, grabber.getImageWidth(),grabber.getImageHeight());
recorder.setVideoCodec(13);
recorder.setFormat("mp4");
recorder.setPixelFormat(avutil.PIX_FMT_YUV420P);
recorder.setFrameRate(30);
recorder.setVideoBitrate(10 * 1024 * 1024);
recorder.start();
setVideoCodec(13) - What does the 13 mean, and how can I find out which actual codec stands behind a given number?
setPixelFormat - I just don't get this one; I don't know what it does in general.
setFrameRate(30) - This should be pretty clear, but what is the logic behind choosing a frame rate (isn't higher always better)?
setVideoBitrate(10 * 1024 * 1024) - Again, I have almost no idea what this does or what the logic behind the numbers is.
Finally, I want to mention one last problem I get when recording video like this. If the actual length of the video is, say, 20 seconds, the video file created by the program plays back significantly faster. I can't tell if it's exactly 2 times faster than it should be, but in general a 20-second recording plays for about 10 seconds. What may cause this, and how can I fix it?
The video codec can be chosen from this list found in avcodec.h/avcodec.java (as you can see, the number 13 gets us MPEG-4; there are others, but FFmpeg doesn't have an encoder for all of them). A snippet using the named constant follows the list:
AV_CODEC_ID_MPEG1VIDEO = 1,
/** preferred ID for MPEG-1/2 video decoding */
AV_CODEC_ID_MPEG2VIDEO = 2,
AV_CODEC_ID_MPEG2VIDEO_XVMC = 3,
AV_CODEC_ID_H261 = 4,
AV_CODEC_ID_H263 = 5,
AV_CODEC_ID_RV10 = 6,
AV_CODEC_ID_RV20 = 7,
AV_CODEC_ID_MJPEG = 8,
AV_CODEC_ID_MJPEGB = 9,
AV_CODEC_ID_LJPEG = 10,
AV_CODEC_ID_SP5X = 11,
AV_CODEC_ID_JPEGLS = 12,
AV_CODEC_ID_MPEG4 = 13,
AV_CODEC_ID_RAWVIDEO = 14,
AV_CODEC_ID_MSMPEG4V1 = 15,
AV_CODEC_ID_MSMPEG4V2 = 16,
AV_CODEC_ID_MSMPEG4V3 = 17,
AV_CODEC_ID_WMV1 = 18,
AV_CODEC_ID_WMV2 = 19,
AV_CODEC_ID_H263P = 20,
AV_CODEC_ID_H263I = 21,
AV_CODEC_ID_FLV1 = 22,
AV_CODEC_ID_SVQ1 = 23,
AV_CODEC_ID_SVQ3 = 24,
AV_CODEC_ID_DVVIDEO = 25,
AV_CODEC_ID_HUFFYUV = 26,
AV_CODEC_ID_CYUV = 27,
AV_CODEC_ID_H264 = 28,
AV_CODEC_ID_INDEO3 = 29,
AV_CODEC_ID_VP3 = 30,
AV_CODEC_ID_THEORA = 31,
AV_CODEC_ID_ASV1 = 32,
AV_CODEC_ID_ASV2 = 33,
AV_CODEC_ID_FFV1 = 34,
AV_CODEC_ID_4XM = 35,
AV_CODEC_ID_VCR1 = 36,
AV_CODEC_ID_CLJR = 37,
AV_CODEC_ID_MDEC = 38,
AV_CODEC_ID_ROQ = 39,
AV_CODEC_ID_INTERPLAY_VIDEO = 40,
AV_CODEC_ID_XAN_WC3 = 41,
AV_CODEC_ID_XAN_WC4 = 42,
AV_CODEC_ID_RPZA = 43,
AV_CODEC_ID_CINEPAK = 44,
AV_CODEC_ID_WS_VQA = 45,
AV_CODEC_ID_MSRLE = 46,
AV_CODEC_ID_MSVIDEO1 = 47,
AV_CODEC_ID_IDCIN = 48,
AV_CODEC_ID_8BPS = 49,
AV_CODEC_ID_SMC = 50,
AV_CODEC_ID_FLIC = 51,
AV_CODEC_ID_TRUEMOTION1 = 52,
AV_CODEC_ID_VMDVIDEO = 53,
AV_CODEC_ID_MSZH = 54,
AV_CODEC_ID_ZLIB = 55,
AV_CODEC_ID_QTRLE = 56,
AV_CODEC_ID_TSCC = 57,
AV_CODEC_ID_ULTI = 58,
AV_CODEC_ID_QDRAW = 59,
AV_CODEC_ID_VIXL = 60,
AV_CODEC_ID_QPEG = 61,
AV_CODEC_ID_PNG = 62,
AV_CODEC_ID_PPM = 63,
AV_CODEC_ID_PBM = 64,
AV_CODEC_ID_PGM = 65,
AV_CODEC_ID_PGMYUV = 66,
AV_CODEC_ID_PAM = 67,
AV_CODEC_ID_FFVHUFF = 68,
AV_CODEC_ID_RV30 = 69,
AV_CODEC_ID_RV40 = 70,
AV_CODEC_ID_VC1 = 71,
AV_CODEC_ID_WMV3 = 72,
AV_CODEC_ID_LOCO = 73,
AV_CODEC_ID_WNV1 = 74,
AV_CODEC_ID_AASC = 75,
AV_CODEC_ID_INDEO2 = 76,
AV_CODEC_ID_FRAPS = 77,
AV_CODEC_ID_TRUEMOTION2 = 78,
AV_CODEC_ID_BMP = 79,
AV_CODEC_ID_CSCD = 80,
AV_CODEC_ID_MMVIDEO = 81,
AV_CODEC_ID_ZMBV = 82,
AV_CODEC_ID_AVS = 83,
AV_CODEC_ID_SMACKVIDEO = 84,
AV_CODEC_ID_NUV = 85,
AV_CODEC_ID_KMVC = 86,
AV_CODEC_ID_FLASHSV = 87,
AV_CODEC_ID_CAVS = 88,
AV_CODEC_ID_JPEG2000 = 89,
AV_CODEC_ID_VMNC = 90,
AV_CODEC_ID_VP5 = 91,
AV_CODEC_ID_VP6 = 92,
AV_CODEC_ID_VP6F = 93,
AV_CODEC_ID_TARGA = 94,
AV_CODEC_ID_DSICINVIDEO = 95,
AV_CODEC_ID_TIERTEXSEQVIDEO = 96,
AV_CODEC_ID_TIFF = 97,
AV_CODEC_ID_GIF = 98,
AV_CODEC_ID_DXA = 99,
AV_CODEC_ID_DNXHD = 100,
AV_CODEC_ID_THP = 101,
AV_CODEC_ID_SGI = 102,
AV_CODEC_ID_C93 = 103,
AV_CODEC_ID_BETHSOFTVID = 104,
AV_CODEC_ID_PTX = 105,
AV_CODEC_ID_TXD = 106,
AV_CODEC_ID_VP6A = 107,
AV_CODEC_ID_AMV = 108,
AV_CODEC_ID_VB = 109,
AV_CODEC_ID_PCX = 110,
AV_CODEC_ID_SUNRAST = 111,
AV_CODEC_ID_INDEO4 = 112,
AV_CODEC_ID_INDEO5 = 113,
AV_CODEC_ID_MIMIC = 114,
AV_CODEC_ID_RL2 = 115,
AV_CODEC_ID_ESCAPE124 = 116,
AV_CODEC_ID_DIRAC = 117,
AV_CODEC_ID_BFI = 118,
AV_CODEC_ID_CMV = 119,
AV_CODEC_ID_MOTIONPIXELS = 120,
AV_CODEC_ID_TGV = 121,
AV_CODEC_ID_TGQ = 122,
AV_CODEC_ID_TQI = 123,
AV_CODEC_ID_AURA = 124,
AV_CODEC_ID_AURA2 = 125,
AV_CODEC_ID_V210X = 126,
AV_CODEC_ID_TMV = 127,
AV_CODEC_ID_V210 = 128,
AV_CODEC_ID_DPX = 129,
AV_CODEC_ID_MAD = 130,
AV_CODEC_ID_FRWU = 131,
AV_CODEC_ID_FLASHSV2 = 132,
AV_CODEC_ID_CDGRAPHICS = 133,
AV_CODEC_ID_R210 = 134,
AV_CODEC_ID_ANM = 135,
AV_CODEC_ID_BINKVIDEO = 136,
AV_CODEC_ID_IFF_ILBM = 137,
AV_CODEC_ID_IFF_BYTERUN1 = 138,
AV_CODEC_ID_KGV1 = 139,
AV_CODEC_ID_YOP = 140,
AV_CODEC_ID_VP8 = 141,
AV_CODEC_ID_PICTOR = 142,
AV_CODEC_ID_ANSI = 143,
AV_CODEC_ID_A64_MULTI = 144,
AV_CODEC_ID_A64_MULTI5 = 145,
AV_CODEC_ID_R10K = 146,
AV_CODEC_ID_MXPEG = 147,
AV_CODEC_ID_LAGARITH = 148,
AV_CODEC_ID_PRORES = 149,
AV_CODEC_ID_JV = 150,
AV_CODEC_ID_DFA = 151,
AV_CODEC_ID_WMV3IMAGE = 152,
AV_CODEC_ID_VC1IMAGE = 153,
AV_CODEC_ID_UTVIDEO = 154,
AV_CODEC_ID_BMV_VIDEO = 155,
AV_CODEC_ID_VBLE = 156,
AV_CODEC_ID_DXTORY = 157,
AV_CODEC_ID_V410 = 158,
AV_CODEC_ID_XWD = 159,
AV_CODEC_ID_CDXL = 160,
AV_CODEC_ID_XBM = 161,
AV_CODEC_ID_ZEROCODEC = 162,
AV_CODEC_ID_MSS1 = 163,
AV_CODEC_ID_MSA1 = 164,
AV_CODEC_ID_TSCC2 = 165,
AV_CODEC_ID_MTS2 = 166,
AV_CODEC_ID_CLLC = 167,
AV_CODEC_ID_MSS2 = 168,
AV_CODEC_ID_VP9 = 169,
AV_CODEC_ID_AIC = 170,
// etc
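So rather than the magic number 13, you can reference the constant by name through the same avcodec bindings (the exact package depends on your JavaCV version):

recorder.setVideoCodec(avcodec.AV_CODEC_ID_MPEG4); // identical to setVideoCodec(13)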
PixelFormat can be selected from this list in pixfmt.h/avutil.java, but each codec only supports a few of them (most of them support at least AV_PIX_FMT_YUV420P):
/** planar YUV 4:2:0, 12bpp, (1 Cr & Cb sample per 2x2 Y samples) */
AV_PIX_FMT_YUV420P = 0,
/** packed YUV 4:2:2, 16bpp, Y0 Cb Y1 Cr */
AV_PIX_FMT_YUYV422 = 1,
/** packed RGB 8:8:8, 24bpp, RGBRGB... */
AV_PIX_FMT_RGB24 = 2,
/** packed RGB 8:8:8, 24bpp, BGRBGR... */
AV_PIX_FMT_BGR24 = 3,
/** planar YUV 4:2:2, 16bpp, (1 Cr & Cb sample per 2x1 Y samples) */
AV_PIX_FMT_YUV422P = 4,
/** planar YUV 4:4:4, 24bpp, (1 Cr & Cb sample per 1x1 Y samples) */
AV_PIX_FMT_YUV444P = 5,
/** planar YUV 4:1:0, 9bpp, (1 Cr & Cb sample per 4x4 Y samples) */
AV_PIX_FMT_YUV410P = 6,
/** planar YUV 4:1:1, 12bpp, (1 Cr & Cb sample per 4x1 Y samples) */
AV_PIX_FMT_YUV411P = 7,
/** Y , 8bpp */
AV_PIX_FMT_GRAY8 = 8,
/** Y , 1bpp, 0 is white, 1 is black, in each byte pixels are ordered from the msb to the lsb */
AV_PIX_FMT_MONOWHITE = 9,
/** Y , 1bpp, 0 is black, 1 is white, in each byte pixels are ordered from the msb to the lsb */
AV_PIX_FMT_MONOBLACK = 10,
/** 8 bit with PIX_FMT_RGB32 palette */
AV_PIX_FMT_PAL8 = 11,
/** planar YUV 4:2:0, 12bpp, full scale (JPEG), deprecated in favor of PIX_FMT_YUV420P and setting color_range */
AV_PIX_FMT_YUVJ420P = 12,
/** planar YUV 4:2:2, 16bpp, full scale (JPEG), deprecated in favor of PIX_FMT_YUV422P and setting color_range */
AV_PIX_FMT_YUVJ422P = 13,
/** planar YUV 4:4:4, 24bpp, full scale (JPEG), deprecated in favor of PIX_FMT_YUV444P and setting color_range */
AV_PIX_FMT_YUVJ444P = 14,
/** XVideo Motion Acceleration via common packet passing */
AV_PIX_FMT_XVMC_MPEG2_MC = 15,
AV_PIX_FMT_XVMC_MPEG2_IDCT = 16,
/** packed YUV 4:2:2, 16bpp, Cb Y0 Cr Y1 */
AV_PIX_FMT_UYVY422 = 17,
/** packed YUV 4:1:1, 12bpp, Cb Y0 Y1 Cr Y2 Y3 */
AV_PIX_FMT_UYYVYY411 = 18,
/** packed RGB 3:3:2, 8bpp, (msb)2B 3G 3R(lsb) */
AV_PIX_FMT_BGR8 = 19,
/** packed RGB 1:2:1 bitstream, 4bpp, (msb)1B 2G 1R(lsb), a byte contains two pixels, the first pixel in the byte is the one composed by the 4 msb bits */
AV_PIX_FMT_BGR4 = 20,
/** packed RGB 1:2:1, 8bpp, (msb)1B 2G 1R(lsb) */
AV_PIX_FMT_BGR4_BYTE = 21,
/** packed RGB 3:3:2, 8bpp, (msb)2R 3G 3B(lsb) */
AV_PIX_FMT_RGB8 = 22,
/** packed RGB 1:2:1 bitstream, 4bpp, (msb)1R 2G 1B(lsb), a byte contains two pixels, the first pixel in the byte is the one composed by the 4 msb bits */
AV_PIX_FMT_RGB4 = 23,
/** packed RGB 1:2:1, 8bpp, (msb)1R 2G 1B(lsb) */
AV_PIX_FMT_RGB4_BYTE = 24,
/** planar YUV 4:2:0, 12bpp, 1 plane for Y and 1 plane for the UV components, which are interleaved (first byte U and the following byte V) */
AV_PIX_FMT_NV12 = 25,
/** as above, but U and V bytes are swapped */
AV_PIX_FMT_NV21 = 26,
/** packed ARGB 8:8:8:8, 32bpp, ARGBARGB... */
AV_PIX_FMT_ARGB = 27,
/** packed RGBA 8:8:8:8, 32bpp, RGBARGBA... */
AV_PIX_FMT_RGBA = 28,
/** packed ABGR 8:8:8:8, 32bpp, ABGRABGR... */
AV_PIX_FMT_ABGR = 29,
/** packed BGRA 8:8:8:8, 32bpp, BGRABGRA... */
AV_PIX_FMT_BGRA = 30,
/** Y , 16bpp, big-endian */
AV_PIX_FMT_GRAY16BE = 31,
/** Y , 16bpp, little-endian */
AV_PIX_FMT_GRAY16LE = 32,
/** planar YUV 4:4:0 (1 Cr & Cb sample per 1x2 Y samples) */
AV_PIX_FMT_YUV440P = 33,
/** planar YUV 4:4:0 full scale (JPEG), deprecated in favor of PIX_FMT_YUV440P and setting color_range */
AV_PIX_FMT_YUVJ440P = 34,
/** planar YUV 4:2:0, 20bpp, (1 Cr & Cb sample per 2x2 Y & A samples) */
AV_PIX_FMT_YUVA420P = 35,
/** H.264 HW decoding with VDPAU, data[0] contains a vdpau_render_state struct which contains the bitstream of the slices as well as various fields extracted from headers */
AV_PIX_FMT_VDPAU_H264 = 36,
/** MPEG-1 HW decoding with VDPAU, data[0] contains a vdpau_render_state struct which contains the bitstream of the slices as well as various fields extracted from headers */
AV_PIX_FMT_VDPAU_MPEG1 = 37,
/** MPEG-2 HW decoding with VDPAU, data[0] contains a vdpau_render_state struct which contains the bitstream of the slices as well as various fields extracted from headers */
AV_PIX_FMT_VDPAU_MPEG2 = 38,
/** WMV3 HW decoding with VDPAU, data[0] contains a vdpau_render_state struct which contains the bitstream of the slices as well as various fields extracted from headers */
AV_PIX_FMT_VDPAU_WMV3 = 39,
/** VC-1 HW decoding with VDPAU, data[0] contains a vdpau_render_state struct which contains the bitstream of the slices as well as various fields extracted from headers */
AV_PIX_FMT_VDPAU_VC1 = 40,
/** packed RGB 16:16:16, 48bpp, 16R, 16G, 16B, the 2-byte value for each R/G/B component is stored as big-endian */
AV_PIX_FMT_RGB48BE = 41,
/** packed RGB 16:16:16, 48bpp, 16R, 16G, 16B, the 2-byte value for each R/G/B component is stored as little-endian */
AV_PIX_FMT_RGB48LE = 42,
/** packed RGB 5:6:5, 16bpp, (msb) 5R 6G 5B(lsb), big-endian */
AV_PIX_FMT_RGB565BE = 43,
/** packed RGB 5:6:5, 16bpp, (msb) 5R 6G 5B(lsb), little-endian */
AV_PIX_FMT_RGB565LE = 44,
/** packed RGB 5:5:5, 16bpp, (msb)1A 5R 5G 5B(lsb), big-endian, most significant bit to 0 */
AV_PIX_FMT_RGB555BE = 45,
/** packed RGB 5:5:5, 16bpp, (msb)1A 5R 5G 5B(lsb), little-endian, most significant bit to 0 */
AV_PIX_FMT_RGB555LE = 46,
/** packed BGR 5:6:5, 16bpp, (msb) 5B 6G 5R(lsb), big-endian */
AV_PIX_FMT_BGR565BE = 47,
/** packed BGR 5:6:5, 16bpp, (msb) 5B 6G 5R(lsb), little-endian */
AV_PIX_FMT_BGR565LE = 48,
/** packed BGR 5:5:5, 16bpp, (msb)1A 5B 5G 5R(lsb), big-endian, most significant bit to 1 */
AV_PIX_FMT_BGR555BE = 49,
/** packed BGR 5:5:5, 16bpp, (msb)1A 5B 5G 5R(lsb), little-endian, most significant bit to 1 */
AV_PIX_FMT_BGR555LE = 50,
/** HW acceleration through VA API at motion compensation entry-point, Picture.data[3] contains a vaapi_render_state struct which contains macroblocks as well as various fields extracted from headers */
AV_PIX_FMT_VAAPI_MOCO = 51,
/** HW acceleration through VA API at IDCT entry-point, Picture.data[3] contains a vaapi_render_state struct which contains fields extracted from headers */
AV_PIX_FMT_VAAPI_IDCT = 52,
/** HW decoding through VA API, Picture.data[3] contains a vaapi_render_state struct which contains the bitstream of the slices as well as various fields extracted from headers */
AV_PIX_FMT_VAAPI_VLD = 53,
/** planar YUV 4:2:0, 24bpp, (1 Cr & Cb sample per 2x2 Y samples), little-endian */
AV_PIX_FMT_YUV420P16LE = 54,
/** planar YUV 4:2:0, 24bpp, (1 Cr & Cb sample per 2x2 Y samples), big-endian */
AV_PIX_FMT_YUV420P16BE = 55,
/** planar YUV 4:2:2, 32bpp, (1 Cr & Cb sample per 2x1 Y samples), little-endian */
AV_PIX_FMT_YUV422P16LE = 56,
/** planar YUV 4:2:2, 32bpp, (1 Cr & Cb sample per 2x1 Y samples), big-endian */
AV_PIX_FMT_YUV422P16BE = 57,
/** planar YUV 4:4:4, 48bpp, (1 Cr & Cb sample per 1x1 Y samples), little-endian */
AV_PIX_FMT_YUV444P16LE = 58,
/** planar YUV 4:4:4, 48bpp, (1 Cr & Cb sample per 1x1 Y samples), big-endian */
AV_PIX_FMT_YUV444P16BE = 59,
/** MPEG4 HW decoding with VDPAU, data[0] contains a vdpau_render_state struct which contains the bitstream of the slices as well as various fields extracted from headers */
AV_PIX_FMT_VDPAU_MPEG4 = 60,
/** HW decoding through DXVA2, Picture.data[3] contains a LPDIRECT3DSURFACE9 pointer */
AV_PIX_FMT_DXVA2_VLD = 61,
/** packed RGB 4:4:4, 16bpp, (msb)4A 4R 4G 4B(lsb), little-endian, most significant bits to 0 */
AV_PIX_FMT_RGB444LE = 62,
/** packed RGB 4:4:4, 16bpp, (msb)4A 4R 4G 4B(lsb), big-endian, most significant bits to 0 */
AV_PIX_FMT_RGB444BE = 63,
/** packed BGR 4:4:4, 16bpp, (msb)4A 4B 4G 4R(lsb), little-endian, most significant bits to 1 */
AV_PIX_FMT_BGR444LE = 64,
/** packed BGR 4:4:4, 16bpp, (msb)4A 4B 4G 4R(lsb), big-endian, most significant bits to 1 */
AV_PIX_FMT_BGR444BE = 65,
/** 8bit gray, 8bit alpha */
AV_PIX_FMT_YA8 = 66,
// etc
FrameRate indicates the number of frames per second the video should be played back at (it has nothing to do with the number or the timing of images you actually record, although it provides a basis for the encoding bitrate). So, in the case of 30 FPS, to cover 20 seconds of video you need to call record() 30 * 20 = 600 times. If you do not call record() 600 times, that is the cause of your playback-speed problem.
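To make that concrete, a minimal sketch of the recording loop (method names follow the grabber/recorder objects already used in the question; error handling omitted):

// 30 FPS * 20 s = 600 frames; one record() call per frame
int frameRate = 30, seconds = 20;
recorder.setFrameRate(frameRate);
recorder.start();
for (int i = 0; i < frameRate * seconds; i++) {
    recorder.record(grabber.grab()); // grab a frame from the webcam and encode it
}
recorder.stop();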
VideoBitrate sets the bitrate (in bits per second) at which the video stream is encoded. Wikipedia has a nice article about that.
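If you want a starting point for the number itself, a common rule of thumb (a heuristic on my part, not something FFmpeg mandates) is bitrate ≈ width × height × fps × bits-per-pixel, with roughly 0.1-0.2 bits per pixel as a ballpark for MPEG-4-class codecs:

// Heuristic sizing: ~0.15 bits per pixel per frame at 30 FPS
double bitsPerPixel = 0.15;
int bitrate = (int) (grabber.getImageWidth() * grabber.getImageHeight() * 30 * bitsPerPixel);
recorder.setVideoBitrate(bitrate);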