Given a toy struct with a default constructor like this one:
struct RGB {
    unsigned char r, g, b;
    RGB() : r(0), g(0), b(0) {}
};
How do I initialise one to a specific colour, assuming I don't have access to the source code to add my own constructor?
I don't fully understand why these don't work:
// OK, I can sort-of accept this one
RGB red = {255, 0, 0};
// Not shorthand for green.r=0, green.g=255, green.b=0;?
RGB green = {.r = 0, .g = 255, .b = 0};
// I seem to be missing a constructor that accepts a list?
RGB blue{0, 0, 255};
Is there any other C++11 way to shorten the good old-fashioned:
RGB yellow;
yellow.r = 255;
yellow.g = 255;
yellow.b = 0;
Furthermore, how could I minimally modify the struct declaration to support any of the above, while still keeping a default initialisation method?
If you cannot modify the struct to add constructor arguments, how about a helper function:
RGB makeRGB(unsigned char r, unsigned char g, unsigned char b)
{
    RGB result;
    result.r = r;
    result.g = g;
    result.b = b;
    return result;
}
Which can be used like so:
RGB red = makeRGB(255, 0, 0);
Return value optimization will take care of the temporary and provide a no-overhead solution unless you are using a terrible compiler.
If you could modify the struct, the ideal solution would be adding a constructor that takes the three components, with the default constructor delegating to it:
struct RGB {
    unsigned char r, g, b;
    explicit RGB(unsigned char r, unsigned char g, unsigned char b)
        : r(r), g(g), b(b) {}
    RGB() : RGB(0, 0, 0) {}
};
Which can be used like you would expect:
RGB red(255, 0, 0);
RGB green{0, 255, 0};
RGB blue;
blue.b = 255;
Given
struct color
{
    color(std::initializer_list<float> list) = delete;
    float r, g, b;
};
color c = {1,2,3}; and color c{1,2,3}; are the same.
Syntax like color c = {.r = 0, .g = 255, .b = 0}; (designated initializers) comes from the C programming language; it is not valid in C++11 (C++ only gained designated initializers in C++20).
color c = {1,2,3}; is perfectly fine in C++11. It is called aggregate initialization, and it even takes precedence over an explicit constructor from initializer_list; the snippet above shows how to explicitly delete that constructor.
The constructor from initializer_list can still be called explicitly with color c({1,2,3}).
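A minimal sketch of both points (the type names here are my own examples, not from the question):

#include <initializer_list>

struct AggRGB {                       // plain aggregate: no user-provided constructors
    unsigned char r, g, b;
};

struct ListRGB {                      // has a constructor from initializer_list
    int r, g, b;
    ListRGB(std::initializer_list<int> list) : r(0), g(0), b(0) {
        auto it = list.begin();
        if (it != list.end()) r = *it++;
        if (it != list.end()) g = *it++;
        if (it != list.end()) b = *it;
    }
};

int main()
{
    AggRGB a = {255, 0, 0};           // aggregate initialization (C++11)
    AggRGB a2{0, 255, 0};             // same thing, brace-only form
    ListRGB l({1, 2, 3});             // explicitly calls the initializer_list constructor
    (void)a; (void)a2; (void)l;
}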
Related
I'm trying to convert an HDR image float array I load into a 10-bit DWORD with WIC.
The pixel format of the loaded file is GUID_WICPixelFormat128bppPRGBAFloat, so I get an array of 4 floats per pixel.
When I try to convert these to 10 bit as follows:
struct RGBX
{
    unsigned int b : 10;
    unsigned int g : 10;
    unsigned int r : 10;
    int a : 2;
} rgbx;
(which is the format requested by the NVIDIA encoding library for 10-bit rgb),
then I assume I have to divide each of the floats by 1024.0f in order to get them inside the 10 bits of a DWORD.
However, I notice that some of the floats are > 1, which means that their range is not [0,1] as it happens when the image is 8 bit.
What would their range be? How to store a floating point color into a 10-bits integer?
I'm trying to use the NVidia's HDR encoder which requires an ARGB10 like the above structure.
How is the 10 bit information of a color stored as a floating point number?
By the way, I tried to convert with WIC, but the conversion from GUID_WICPixelFormat128bppPRGBAFloat to GUID_WICPixelFormat32bppR10G10B10A2 fails.
HRESULT ConvertFloatTo10(const float* f, int wi, int he, std::vector<DWORD>& out)
{
    CComPtr<IWICBitmap> b;
    wbfact->CreateBitmapFromMemory(wi, he, GUID_WICPixelFormat128bppPRGBAFloat, wi * 16, wi * he * 16, (BYTE*)f, &b);
    CComPtr<IWICFormatConverter> wf;
    wbfact->CreateFormatConverter(&wf);
    // This call fails with 0x88982f50 : The component cannot be found.
    HRESULT hr = wf->Initialize(b, GUID_WICPixelFormat32bppR10G10B10A2, WICBitmapDitherTypeNone, 0, 0, WICBitmapPaletteTypeCustom);
    return hr;
}
Edit: I found a paper (https://hal.archives-ouvertes.fr/hal-01704278/document); is this relevant to this question?
Floating-point color content that is greater than the 0..1 range is High Dynamic Range (HDR) content. If you trivially convert it to 10:10:10:2 UNORM then you are using 'clipping' for values over 1. This doesn't give good results.
SDR 10:10:10 or 8:8:8
You should instead use tone-mapping which converts the HDR signal to a SDR (Standard Dynamic Range a.k.a. 0..1) before or as part of doing the conversion to 10:10:10:2.
There are many different approaches to tone-mapping, but a common 'generic' solution is the Reinhard tone-mapping operator. Here's an implementation using DirectXTex.
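(For reference, the extended Reinhard operator scales each value as L_out = L * (1 + L / Lwhite^2) / (1 + L), where Lwhite is the largest luminance in the scene; that is why maxLum is computed first and squared before the transform below.)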
std::unique_ptr<ScratchImage> timage(new (std::nothrow) ScratchImage);
if (!timage)
{
    wprintf(L"\nERROR: Memory allocation failed\n");
    return 1;
}

// Compute max luminosity across all images
XMVECTOR maxLum = XMVectorZero();
hr = EvaluateImage(image->GetImages(), image->GetImageCount(), image->GetMetadata(),
    [&](const XMVECTOR* pixels, size_t w, size_t y)
    {
        UNREFERENCED_PARAMETER(y);
        for (size_t j = 0; j < w; ++j)
        {
            static const XMVECTORF32 s_luminance = { { { 0.3f, 0.59f, 0.11f, 0.f } } };
            XMVECTOR v = *pixels++;
            v = XMVector3Dot(v, s_luminance);
            maxLum = XMVectorMax(v, maxLum);
        }
    });
if (FAILED(hr))
{
    wprintf(L" FAILED [tonemap maxlum] (%08X%ls)\n", static_cast<unsigned int>(hr), GetErrorDesc(hr));
    return 1;
}

maxLum = XMVectorMultiply(maxLum, maxLum);

hr = TransformImage(image->GetImages(), image->GetImageCount(), image->GetMetadata(),
    [&](XMVECTOR* outPixels, const XMVECTOR* inPixels, size_t w, size_t y)
    {
        UNREFERENCED_PARAMETER(y);
        for (size_t j = 0; j < w; ++j)
        {
            XMVECTOR value = inPixels[j];

            const XMVECTOR scale = XMVectorDivide(
                XMVectorAdd(g_XMOne, XMVectorDivide(value, maxLum)),
                XMVectorAdd(g_XMOne, value));
            const XMVECTOR nvalue = XMVectorMultiply(value, scale);

            value = XMVectorSelect(value, nvalue, g_XMSelect1110);
            outPixels[j] = value;
        }
    }, *timage);
if (FAILED(hr))
{
    wprintf(L" FAILED [tonemap apply] (%08X%ls)\n", static_cast<unsigned int>(hr), GetErrorDesc(hr));
    return 1;
}
HDR10
UPDATE: If you are trying to convert HDR floating-point content to an "HDR10" signal, then you need to do:
Color-space rotate from Rec.709 or P3D65 to Rec.2020.
Normalize for 'paper white' / 10,000 nits.
Apply the ST.2084 gamma curve.
Quantize to 10-bit (a sketch of this last step follows the transform code below).
// HDTV to UHDTV (Rec.709 color primaries into Rec.2020)
const XMMATRIX c_from709to2020 =
{
0.6274040f, 0.0690970f, 0.0163916f, 0.f,
0.3292820f, 0.9195400f, 0.0880132f, 0.f,
0.0433136f, 0.0113612f, 0.8955950f, 0.f,
0.f, 0.f, 0.f, 1.f
};
// DCI-P3-D65 https://en.wikipedia.org/wiki/DCI-P3 to UHDTV (DCI-P3-D65 color primaries into Rec.2020)
const XMMATRIX c_fromP3D65to2020 =
{
0.753845f, 0.0457456f, -0.00121055f, 0.f,
0.198593f, 0.941777f, 0.0176041f, 0.f,
0.047562f, 0.0124772f, 0.983607f, 0.f,
0.f, 0.f, 0.f, 1.f
};
// Custom Rec.709 into Rec.2020
const XMMATRIX c_fromExpanded709to2020 =
{
0.6274040f, 0.0457456f, -0.00121055f, 0.f,
0.3292820f, 0.941777f, 0.0176041f, 0.f,
0.0433136f, 0.0124772f, 0.983607f, 0.f,
0.f, 0.f, 0.f, 1.f
};
inline float LinearToST2084(float normalizedLinearValue)
{
const float ST2084 = pow((0.8359375f + 18.8515625f * pow(abs(normalizedLinearValue), 0.1593017578f)) / (1.0f + 18.6875f * pow(abs(normalizedLinearValue), 0.1593017578f)), 78.84375f);
return ST2084; // Don't clamp between [0..1], so we can still perform operations on scene values higher than 10,000 nits
}
// ST.2084 is defined against a 10,000-nit peak
const XMVECTORF32 c_MaxNitsFor2084 = { { { 10000.0f, 10000.0f, 10000.0f, 1.f } } };

// You can adjust this up to 10000.f
float paperWhiteNits = 200.f;
hr = TransformImage(image->GetImages(), image->GetImageCount(), image->GetMetadata(),
    [&](XMVECTOR* outPixels, const XMVECTOR* inPixels, size_t w, size_t y)
    {
        UNREFERENCED_PARAMETER(y);
        const XMVECTOR paperWhite = XMVectorReplicate(paperWhiteNits);
        for (size_t j = 0; j < w; ++j)
        {
            XMVECTOR value = inPixels[j];

            XMVECTOR nvalue = XMVector3Transform(value, c_from709to2020);
            // Some people prefer the look of using c_fromP3D65to2020
            // or c_fromExpanded709to2020 instead.

            // Convert to ST.2084
            nvalue = XMVectorDivide(XMVectorMultiply(nvalue, paperWhite), c_MaxNitsFor2084);

            XMFLOAT4A tmp;
            XMStoreFloat4A(&tmp, nvalue);

            tmp.x = LinearToST2084(tmp.x);
            tmp.y = LinearToST2084(tmp.y);
            tmp.z = LinearToST2084(tmp.z);

            nvalue = XMLoadFloat4A(&tmp);

            value = XMVectorSelect(value, nvalue, g_XMSelect1110);
            outPixels[j] = value;
        }
    }, *timage);
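That covers the first three steps. For the final quantize-to-10-bit step, a minimal sketch (my own helper, not part of the original code) using DirectXMath's packed-vector support could look like this:

#include <DirectXPackedVector.h>

// Packs one PQ-encoded pixel (components already in [0,1]) into a 10:10:10:2 DWORD.
// XMStoreUDecN4 clamps to [0,1], scales x/y/z by 1023 and w by 3, and stores R in the
// low 10 bits (R10G10B10A2 order); swap components first if you need B in the low bits,
// as in the RGBX struct from the question.
inline DWORD PackR10G10B10A2(DirectX::FXMVECTOR pqPixel)
{
    DirectX::PackedVector::XMUDECN4 packed;
    DirectX::PackedVector::XMStoreUDecN4(&packed, pqPixel);
    return packed.v;
}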
You should really take a look at texconv.
Reference
Reinhard et al., "Photographic tone reproduction for digital images", ACM Transactions on Graphics, Volume 21, Issue 3 (July 2002).
@ChuckWalbourn's answer is helpful, however I don't want to tone-map to [0,1]: there is no point in tone-mapping to SDR and then going to 10-bit HDR.
What I think is correct is to scale to [0,4] instead, by first using g_XMFour:
const XMVECTOR scale = XMVectorDivide(
XMVectorAdd(g_XMFour, XMVectorDivide(v, maxLum)),
XMVectorAdd(g_XMFour, v));
then using a specialized 10-bit store which scales by 255 instead of 1023:
void XMStoreUDecN4a(DirectX::PackedVector::XMUDECN4* pDestination, DirectX::FXMVECTOR V)
{
    using namespace DirectX;
    XMVECTOR N;
    static const XMVECTOR Scale = { 255.0f, 255.0f, 255.0f, 3.0f };

    assert(pDestination);
    N = XMVectorClamp(V, XMVectorZero(), g_XMFour);
    N = XMVectorMultiply(N, Scale);

    pDestination->v = ((uint32_t)DirectX::XMVectorGetW(N) << 30) |
                      (((uint32_t)DirectX::XMVectorGetZ(N) & 0x3FF) << 20) |
                      (((uint32_t)DirectX::XMVectorGetY(N) & 0x3FF) << 10) |
                      (((uint32_t)DirectX::XMVectorGetX(N) & 0x3FF));
}
And then a specialized 10-bit load which divides with 255 instead of 1023:
// 'fourx' is not defined above; assuming a simple helper struct of four floats:
struct fourx { float r, g, b, a; };

DirectX::XMVECTOR XMLoadUDecN4a(DirectX::PackedVector::XMUDECN4* pSource)
{
    using namespace DirectX;
    fourx vectorOut;
    uint32_t Element;

    Element = pSource->v & 0x3FF;
    vectorOut.r = (float)Element / 255.f;
    Element = (pSource->v >> 10) & 0x3FF;
    vectorOut.g = (float)Element / 255.f;
    Element = (pSource->v >> 20) & 0x3FF;
    vectorOut.b = (float)Element / 255.f;
    vectorOut.a = (float)(pSource->v >> 30) / 3.f;

    const DirectX::XMVECTORF32 j = { vectorOut.r, vectorOut.g, vectorOut.b, vectorOut.a };
    return j;
}
What I want to do is get the 4 vertex pixel points (2D coordinates) of a QR code,
and pass both them and the 3D world coordinates of the QR code as parameters to solvePnP.
But when I run it, solvePnP doesn't work! The error is something like this:
Assertion failed (npoints >= 0 && npoints == std::max(ipoints.checkVector(2, CV_32F), ipoints.checkVector(2, CV_64F))) in cv::solvePnP
The solvePnP documentation says it can take a std::vector or a cv::Mat, so I tried changing between both of those data types, but it still fails.
My source code is below:
Point3d pt[4];
pt[0] = Point3d(0, 0, 0);
pt[1] = Point3d(0, 178, 0);
pt[2] = Point3d(178, 178, 0);
pt[3] = Point3d(178, 0, 0);

vector<Point3f> objectPoints;
for (int i = 0; i < 4; i++)
    objectPoints.push_back(pt[i]); // 3d world coordinates

Point2d point[4];
After this procedure I get the coordinates of the QR code's 4 vertices into point[]. Next:
vector<Point2f> imagePoints;
for (int i = 0; i < 4; i++)
imagePoints.push_back(point[i]); // 2d image coordinates
//Mat objPts(4, 1, CV_64F, pt);
//Mat imgPts(4, 1, CV_64F, point);
// camera parameters
double Intrinsic[] = { fx, 0, cx, 0, fy, cy, 0, 0, 1 };
Mat Camera_Matrix(3, 3, CV_64FC1, Intrinsic);
double Distort[] = { k1, k2, p1, p2 };
Mat DistortCoeffs(4, 1, CV_64FC1, Distort);
// estimate camera pose
Mat rvec, tvec; // rotation & translation vectors
solvePnP(objectPoints, imagePoints, Camera_Matrix, DistortCoeffs, rvec, tvec);
please help!
In your code, the arrays pt and point are of type Point3d and Point2d, but objectPoints and imagePoints are vectors of Point3f and Point2f; the point types should be consistent.
By the way, unlike what the documentation suggests, it seems that solvePnP requires the object points and image points to be a vector or a cv::Mat with one point per row (N×3 / N×2). I tried using a 3×N / 2×N cv::Mat as input, but the same assertion failure appears.
You may follow the official example to help debug. It is located in samples/cpp/tutorial_code/calib3d/real_time_pose_estimation/src.
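A minimal sketch of the consistent-float version (Camera_Matrix and DistortCoeffs as in the question; point[] holds the detected corners):

// Hypothetical rewrite: use the float variants throughout so the types match.
std::vector<cv::Point3f> objectPoints = {
    {0.f, 0.f, 0.f}, {0.f, 178.f, 0.f}, {178.f, 178.f, 0.f}, {178.f, 0.f, 0.f}
};

std::vector<cv::Point2f> imagePoints;
for (int i = 0; i < 4; i++)
    imagePoints.push_back(cv::Point2f(point[i])); // convert the detected 2D corners

cv::Mat rvec, tvec;
cv::solvePnP(objectPoints, imagePoints, Camera_Matrix, DistortCoeffs, rvec, tvec);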
I cannot debug this program. I want to convert RGB to HSI and then compute a histogram of one channel, before Fourier filtering and after it.
#include "stdafx.h"
#include <opencv2/opencv.hpp>
#include <opencv\highgui.h>
#include <iostream>
// ass.cpp : Converts the given RGB image to HSI colour space then
// performs Fourier filtering on a particular channel.
//
using namespace std;
using namespace cv;
// Declarations of 4 unfinished functions
Mat rgb2hsi(const Mat& rgb); // converts RGB image to HSI space
Mat hsi2rgb(const Mat& hsi); // converts HSI image to RGB space
Mat histogram(const Mat& im); // returns the histogram of the selected channel in HSI space
// void filter(Mat& im);// // performs frequency-domain filtering on a single-channel image
int main(int argc, char* argv[])
{
if (argc < 2) // check number of arguments
{
cerr << "feed me something!!" << endl; // no arguments passed
return -1;
}
string path = argv[1];
Mat im; // load an RGB image
Mat hsi = rgb2hsi(im); // convert it to HSI space
Mat slices[3]; // 3 channels of the converted HSI image
im = imread(path); //try to load path
if (im.empty()) // check whether loading failed
{
cerr << "Cannot load the file: " << path << endl;
return -1;
}
imshow("BEFORE", im);
split(hsi, slices); // split up the packed HSI image into an array of matrices
Mat& h = slices[0];
Mat& s = slices[1];
Mat& i = slices[2]; // references to H, S, and I layers
Mat hist1, hist2; // histogram of the selected channel before and after filtering
I am going to apply the histogram here. Maybe I am missing some header; 'draw' is not recognised.
Mat histogram(const Mat& im)
{
Mat hist;
const float range[] = { 0, 255 };
const int channels[] = { 0 };
const int bins = range[1] - range[0];
const int dims[] = { bins, 1 };
const Size binSize(2, 240);
const float* ranges[] = { range };
// calculate the histogram
calcHist(&im, 1, channels, Mat(), hist, 1, dims, ranges);
Mat draw = Mat::zeros(binSize.height, binSize.width * bins, CV_8UC3);
double maxVal;
minMaxLoc(hist, NULL, &maxVal, 0, 0);
for (int b = 0; b < bins; b++)
{
float val = hist.at<float>(b, 0);
int x0 = binSize.width * b;
int y0 = draw.rows - val / maxVal * binSize.height + 1;
int x1 = binSize.width * (b + 1) - 1;
int y1 = draw.rows - 1;
rectangle(draw,0, cv::(Point(x0, y0), cv::Point(x1, y1)), Scalar::all(255), CV_FILLED);
}
return draw;
}
imwrite("input-original.png", rgb); // write the input image
imwrite("hist-original.png", histogram(h)); // write the histogram of the selected channel
filter(h); // perform filtering
merge(slices, 3, hsi); // combine the separated H, S, and I layers to a big packed matrix
rgb = hsi2rgb(hsi); // convert HSI back to RGB colour space
imwrite("input-filtered.png", rgb); // write the filtered image
imwrite("hist-filtered.png", histogram(h)); // and the histogram of the filtered channel
return 0;
}
Mat rgb2hsi(const Mat& rgb)
{
Mat slicesRGB[3];
Mat slicesHSI[3];
Mat &r = slicesRGB[0], &g = slicesRGB[1], &b = slicesRGB[2];
Mat &h = slicesHSI[0], &s = slicesHSI[1], &i = slicesHSI[2];
split(rgb, slicesRGB);
//
// TODO: implement colour conversion RGB => HSI
//
// begin of conversion code
h = r * 1.0f;
s = g * 1.0f;
i = b * 1.0f;
// end of conversion code
Mat hsi;
merge(slicesHSI, 3, hsi);
return hsi;
}
Mat hsi2rgb(const Mat& hsi)
{
Mat slicesRGB[3];
Mat slicesHSI[3];
Mat &r = slicesRGB[0], &g = slicesRGB[1], &b = slicesRGB[2];
Mat &h = slicesHSI[0], &s = slicesHSI[1], &i = slicesHSI[2];
split(hsi, slicesHSI);
// begin of conversion code
r = h * 1.0f;
g = s * 1.0f;
b = i * 1.0f;
// end of conversion code
Mat rgb;
merge(slicesRGB, 3, rgb);
return rgb;
}
Mat histogram(const Mat& im)
{
Mat hist;
const float range[] = { 0, 255 };
const int channels[] = { 0 };
const int bins = range[1] - range[0];
const int dims[] = { bins, 1 };
const Size binSize(2, 240);
const float* ranges[] = { range };
// calculate the histogram
calcHist(&im, 1, channels, Mat(), hist, 1, dims, ranges);
Mat draw = Mat::zeros(binSize.height, binSize.width * bins, CV_8UC3);
double maxVal;
minMaxLoc(hist, NULL, &maxVal, 0, 0);
for (int b = 0; b < bins; b++)
{
float val = hist.at<float>(b, 0);
int x0 = binSize.width * b;
int y0 = draw.rows - val / maxVal * binSize.height + 1;
int x1 = binSize.width * (b + 1) - 1;
int y1 = draw.rows - 1;
rectangle(draw, Point(x0, y0), Point(x1, y1), Scalar::all(255), CV_FILLED);
}
return draw;
}
void filter(Mat& im)
{
int type = im.type();
// Convert pixel data from unsigned 8-bit integers (0~255)
// to 32-bit floating numbers, as required by cv::dft
if (type != CV_32F) im.convertTo(im, CV_32F);
// Perform 2-D Discrete Fourier Transform
Mat f;
dft(im, f, DFT_COMPLEX_OUTPUT + DFT_SCALE); // do DFT
// Separate the packed complex matrix to two matrices
Mat complex[2];
Mat& real = complex[0]; // the real part
Mat& imag = complex[1]; // the imaginary part
split(f, complex); // dft(im) => {real,imag}
// Frequency domain filtering
int xc = im.cols / 2; // find (xc,yc) the highest
int yc = im.rows / 2; // frequency component
for (int y = 0; y < im.rows; y++) // go through each row..
{
for (int x = 0; x < im.cols; x++) // then through each column..
{
//
// TODO: Design your formula here to decide if the component is
// discarded or kept.
//
if (false) // override this condition
{
real.at<float>(y, x) = 0;
imag.at<float>(y, x) = 0;
}
}
}
// Pack the real and imaginary parts
// back to the 2-channel matrix
merge(complex, 2, f); // {real,imag} => f
// Perform 2-D Inverse Discrete Fourier Transform
idft(f, im, DFT_REAL_OUTPUT); // do iDFT
// convert im back to its original type
im.convertTo(im, type);
}
Error List (all in d:\709 Tutorial\Dibya_project\Dibya_project\Dibya_project.cpp):
1. IntelliSense: expected a ';' (line 48)
2. IntelliSense: identifier "draw" is undefined (line 70)
3. IntelliSense: no instance of overloaded function "rectangle" matches the argument list; argument types are: (, int, , cv::Scalar_, int) (line 72)
4. IntelliSense: expected an identifier (line 72)
5. IntelliSense: no instance of constructor "cv::Point_<_Tp>::Point [with _Tp=int]" matches the argument list; argument types are: (, double __cdecl (double _X)) (line 72)
It's broken here (in Mat histogram(...)):
rectangle(draw,0, cv::(Point(x0, y0), cv::Point(x1, y1)), Scalar::all(255), CV_FILLED);
It should be either:
rectangle(draw, cv::Rect(cv::Point(x0, y0), cv::Point(x1, y1)), Scalar::all(255), CV_FILLED);
or:
rectangle(draw, cv::Point(x0, y0), cv::Point(x1, y1), Scalar::all(255), CV_FILLED);
I also think there is a typo in the include of the highgui header file.
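Assuming the typo refers to the backslash in the path, the include would normally be written with forward slashes, e.g. for OpenCV 2.x:

#include <opencv2/highgui/highgui.hpp>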
I'd like to write an image from a kernel to a specified place in device memory which I define by an IntPtr.
Although it's not directly related to this problem, it's the RenderTexture from Unity which I want to change from inside the kernel in order to pass it to a shader which will visualize my algorithm on the GPU.
So far I tried this:
CLkernel.SetArgument (0, (IntPtr)(width*height*4*sizeof(float)), pointerToRT);
which threw InvalidArgumentSize, since this way I cannot specify it as an image. And this:
renderTexture = new ComputeImage2D (CLcontext, ComputeMemoryFlags.WriteOnly | ComputeMemoryFlags.UseHostPointer,
new ComputeImageFormat (ComputeImageChannelOrder.Rgba, ComputeImageChannelType.Float),
width, height, 0, pointerToRT);
CLkernel.SetMemoryArgument (0, renderTexture);
which resulted in an InvalidHostPointer Exception, as the pointer points to a place already in the device memory.
This is the Kernel code:
kernel void WriteToRenderTexture (
    write_only image2d_t bdsRT,
    global float* terrainHeight )
{
    int width = get_global_size (0);
    int x = get_global_id (0);
    int y = get_global_id (1);
    int pixel = x + y * width;

    int2 coords = { x, y };
    float4 value = { 0, terrainHeight [pixel * 3], 0.5, 0 };

    write_imagef (bdsRT, coords, value);
}
Any ideas how I'm able to do so with Cloo?
I need some mathematical algorithm for converting the value from #FF0000 to #00FF00.
I do not want to go via a black value. The conversion should go from red to some cyan colour (light blue), then turn into green. If I just needed to go from FF0000 to 000000 and then to 00FF00, it would be very easy.
The goal is to have levels let say from 0 to 1000 where 0 is #FF0000 and 1000 is #00FF00.
All I need is some smart mapping.
From the comments, I understand you want to be able to show colors between red and green, and that you don't want to "go through black", which I understand as "I want to keep the same intensity".
For this to work, you need to change color spaces. Instead of RGB move to HSL. Take the HSL value for your red, then the HSL value for your green, and interpolate between them in HSL space.
Convert all the intermediate values back to RGB, and there you have your red to green range.
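A minimal sketch of that idea (the helper name is mine): since #FF0000 and #00FF00 both sit at full saturation and 50% lightness in HSL, it is enough to interpolate the hue from 0° to 120° and convert back to RGB:

#include <cmath>

// t in [0,1]: 0 = red (#FF0000), 1 = green (#00FF00), 0.5 = yellow (#FFFF00)
unsigned int red_to_green_hsl(double t)
{
    double h = 120.0 * t;    // hue in degrees
    double c = 1.0;          // chroma = (1 - |2L - 1|) * S = 1 for L = 0.5, S = 1
    double x = c * (1.0 - std::fabs(std::fmod(h / 60.0, 2.0) - 1.0));
    double r, g, b;
    if (h < 60.0) { r = c; g = x; b = 0.0; }  // 0 <= h < 60
    else          { r = x; g = c; b = 0.0; }  // 60 <= h <= 120
    unsigned int R = static_cast<unsigned int>(r * 255.0 + 0.5);
    unsigned int G = static_cast<unsigned int>(g * 255.0 + 0.5);
    unsigned int B = static_cast<unsigned int>(b * 255.0 + 0.5);
    return (R << 16) | (G << 8) | B;
}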
For a general solution, Bart's suggestion to use the HSV space is good. Pooya's and Emilio's answers will interpolate between red and green in the RGB space, which will yield dark yellow/olive in the middle.
If you need a gradient only between red and green, I suggest the following quick solution: Interpolate between red and yellow (#FFFF00) in the first half, and between yellow and green in the second half:
unsigned int red_green_interpol(double x)
{
    if (x <= 0.0) return 0xFF0000;
    if (x >= 1.0) return 0x00FF00;
    if (x < 0.5) {
        unsigned int g = 510 * x;
        return 0xFF0000 | g << 8;
    } else {
        unsigned int r = 510 * (1.0 - x);
        return 0x00FF00 | r << 16;
    }
}
I used a double between 0.0 and 1.0 instead of your integer range from 0 to 1000, but it shouldn't be difficult to adapt it.
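For example, with the 0..1000 levels from the question (level being an int):

unsigned int colour = red_green_interpol(level / 1000.0);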
Maybe something like this, in C:
void print_color(int value)
{
    int r = (255 * value) / 1000;
    printf("#%02x%02x00", r, 255 - r);
}
Edit: sorry, I didn't read the question carefully!
To represent all three colours, in C:
void print_color(int value)
{
    int curr = (3 * value) / 1000; // note: curr can reach 3 when value == 1000, hence the % 3 below
    int color[3];
    color[curr % 3] = (255 * (value - curr * (1000 / 3))) / (1000 / 3);
    color[(curr + 1) % 3] = 255 - color[curr % 3];
    color[(curr + 2) % 3] = 0;
    printf("#%02x%02x%02x", color[0], color[1], color[2]);
}
(indexes can be changed for RGB orders...)
Well, it's already been answered, but you wanted something easy:
void gradient()
{
const int _a=3; // color components order
const int _r=2;
const int _g=1;
const int _b=0;
union _col // color
{
DWORD dd; // unsigned 32bit int (your color)
BYTE db[4]; // 4 x BYTE
} col;
col.db[_r]=255; // start color 0x00FF0000
col.db[_g]= 0;
col.db[_b]= 0;
col.db[_a]= 0;
for (int i=0;i<255;i++) // gradient cycle (color 'intensity' = 255)
{
col.db[_r]--;
col.db[_g]++;
// here use your color col.dd;
}
}
For the levels case: there are only 255 levels here, so remove the for loop and add this instead (level = i, with i in 0..255):
col.db[_r]=255-i;
col.db[_g]= i;
col.db[_b]= 0;
col.db[_a]= 0;