Upscaling images on Retina devices

I know images are upscaled by default on Retina devices, but the default scaling makes the images blurry.
I was wondering if there is a way to scale them in nearest-neighbor mode, where no interpolated (blended) pixels are created; instead each source pixel is simply duplicated into a 2x2 block, so the image looks the way it would on a non-Retina device.
An example of what I'm talking about can be seen in the image below.
[Example image: http://cclloyd.com/downloads/sdfsdf.png]

CoreGraphics will not do a 2x scale like that; you need to write a bit of explicit pixel-mapping logic to do something like this. The following is some code I used for this operation. You would of course need to fill in the details, as it operates on an input buffer of pixels and writes to an output buffer that is 2x larger in each dimension.
// Use special case "DOUBLE" logic that will simply duplicate the exact
// RGB value from the indicated pixel into the 2x sized output buffer.

int numOutputPixels = resizedFrameBuffer.width * resizedFrameBuffer.height;
uint32_t *inPixels32  = (uint32_t*)cgFrameBuffer.pixels;
uint32_t *outPixels32 = (uint32_t*)resizedFrameBuffer.pixels;

int outRow = 0;
int outColumn = 0;

for (int i = 0; i < numOutputPixels; i++) {
  if ((i > 0) && ((i % resizedFrameBuffer.width) == 0)) {
    outRow += 1;
    outColumn = 0;
  }

  // Divide by 2 to get the column/row in the input framebuffer
  int inColumn = outColumn / 2;
  int inRow = outRow / 2;

  // Get the pixel for the row and column this output pixel corresponds to
  int inOffset = (inRow * cgFrameBuffer.width) + inColumn;
  uint32_t pixel = inPixels32[inOffset];
  outPixels32[i] = pixel;

  //fprintf(stdout, "Wrote 0x%.10X for 2x row/col %d %d (%d), read from row/col %d %d (%d)\n", pixel, outRow, outColumn, i, inRow, inColumn, inOffset);

  outColumn += 1;
}
This code of course depends on you creating a buffer of pixels and then wrapping it back up into a CGImageRef, but you can find the code to do that kind of thing easily.
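For completeness, here is a minimal sketch (not from the original answer) of wrapping a 32-bit pixel buffer back up into a CGImageRef through a bitmap context. It assumes premultiplied, little-endian BGRA pixels, and the function name is just for illustration; the resizedFrameBuffer.pixels buffer filled in above is the kind of thing you would pass in.

#include <CoreGraphics/CoreGraphics.h>
#include <cstdint>

// Wrap a width x height buffer of 32-bit BGRA pixels in a CGImage.
// The caller owns the returned image and releases it with CGImageRelease().
CGImageRef CreateImageFromPixels(void *pixels, size_t width, size_t height)
{
    size_t bytesPerRow = width * sizeof(uint32_t);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixels, width, height,
                                                 8,            // bits per component
                                                 bytesPerRow,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedFirst |
                                                 kCGBitmapByteOrder32Little);
    CGImageRef image = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    return image;
}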

Related

Using Processing for image visualization: pixel color thresholds

I have an image to be manipulated, and I'm hoping to identify each white dot in the picture with a counter.
PImage blk;

void setup() {
  size(640, 480);
  blk = loadImage("img.png");
}

void draw() {
  loadPixels();
  blk.loadPixels();
  int i = 0;
  for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
      int loc = x + y * width;
      pixels[loc] = blk.pixels[loc];
      if (blk.pixels[loc] == 0) {
        if (blk.pixels[loc] + 1 != 0) {
          i++;
        }
      }
      float r = red(blk.pixels[loc]);
      float g = green(blk.pixels[loc]);
      float b = blue(blk.pixels[loc]);
      pixels[loc] = color(r, g, b);
    }
  }
  System.out.println(i);
  updatePixels();
}
The main problem is within my if statement; I'm not sure how to approach it logically.
I'm unsure where exactly this is going, but I can help you find the white pixels. Here, I counted 7457 "white" pixels (then I turned them red so you can see where they are and adjust the threshold if you want to get more or fewer of them):
Of course, this is just a proof of concept which you should be able to adapt to your needs.
PImage blk;

void setup() {
  size(640, 480);
  blk = loadImage("img.png");
  blk.loadPixels();
  int whitePixelsCount = 0;
  // I'm doing this in the 'setup()' method because I don't need to do it 60 times per second.
  // Once it's done once I can just use the image as modified, unless you want several
  // different versions (which you can calculate once anyway, then store in different PImages).
  for (int i = 0; i < blk.width * blk.height; i++) {
    float r = red(blk.pixels[i]);
    float g = green(blk.pixels[i]);
    float b = blue(blk.pixels[i]);
    // In RGB, the brightness of each channel is represented by its intensity.
    // So here I'm checking the "average intensity" of the color to see how bright it is,
    // and I compare it to 100 since 255 is the max and I wanted this simple, but you can
    // play with this threshold as much as you like.
    if ((r + g + b) / 3 > 100) {
      whitePixelsCount++;
      // Here I'm making those pixels red so you can see where they are.
      // It's easier to adjust the threshold if you can see what you're doing.
      blk.pixels[i] = color(255, 0, 0);
    }
  }
  println(whitePixelsCount);
  // Push the modified pixels back into the PImage so the changes show when it's drawn.
  blk.updatePixels();
}

void draw() {
  image(blk, 0, 0);
}
In short (you'll read this in the comments too), we count the pixels according to a threshold we can adjust. To make things more obvious for you, I colored the "white" pixels red. You can lower or raise the threshold according to what you see this way, and once you know what you want you can get rid of the color.
There is a difficulty here: the image isn't really "black and white", but more greyscale - which is totally normal, but it makes things harder for what you seem to be trying to do. You'll probably have to tinker a lot to get to the exact ratio that interests you. It could help a lot to edit the original image in GIMP or another image editor that lets you adjust contrast and brightness. It's kind of cheating, but if it doesn't work right off the bat, this strategy could save you some work.
Have fun!

How to flatten an image using OpenCV correctly for image processing and then convert it to Mat again?

I have an image, read using cv::imread. I have to flatten it so that I can use CUDA and the GPU to accelerate my image processing algorithms.
My problem: when I read my image, I can show it correctly using imshow; however, when I flatten it and wrap it in a Mat object to use with imshow, only part of my image is displayed. The size of the output image is also wrong, meaning that some data is really lost. What's the problem with my for loop?
// The problematic part of my code
// The Camera Man gray test image
const char* img_gray_name = "../../Test_Images/cameraman.tiff";
const char* img_blur_name = "../cameraman-blur.tiff";
const char* image_general_name = "cameraman_blur";

cv::Mat img = cv::imread(img_gray_name);
unsigned long int img_gray_size = img.rows * img.cols * sizeof(uchar);

uchar *h_img_in; // input image, converted to a flat array to be
                 // processed by GPU
h_img_in = (uchar *)malloc(img_gray_size);

//*************** The bug should be here! ***************//
for (int i = 0; i < img.rows; ++i) {
    for (int j = 0; j < img.cols; ++j) {
        h_img_in[i*img.cols+j] = img.at<uchar>(i, j);
    }
}

Mat img_test;
img_test = Mat(cv::Size(img.cols, img.rows), CV_8U, h_img_in);
imwrite(img_blur_name, img_test);

// create image window named "camera man"
cv::namedWindow(image_general_name);
// show the image on window
cv::imshow(image_general_name, img_test);
P.S.: I also tested with a new 2D array instead of the 1D h_img_in; the result is the same. This means that something goes wrong with my usage of img.at<uchar>(i, j).
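Not part of the original question, but one likely culprit: cv::imread() loads a file as a 3-channel BGR image by default, so img.at<uchar>(i, j) on such a Mat reads individual interleaved bytes rather than whole pixels, which would produce exactly this "only part of the image, wrong size" symptom. A minimal sketch of the same flatten-and-rewrap round trip on an image forced to a single channel (paths reused from the question):

#include <opencv2/opencv.hpp>
#include <cstdlib>
#include <cstring>

int main()
{
    // Force a single-channel 8-bit image so .at<uchar>(i, j) matches the memory layout.
    cv::Mat img = cv::imread("../../Test_Images/cameraman.tiff", cv::IMREAD_GRAYSCALE);
    if (img.empty()) return 1;

    const size_t img_gray_size = img.rows * img.cols * sizeof(uchar);
    uchar *h_img_in = (uchar *)malloc(img_gray_size);

    // Row-by-row copy also copes with non-contiguous Mats (step > cols).
    for (int i = 0; i < img.rows; ++i)
        std::memcpy(h_img_in + i * img.cols, img.ptr<uchar>(i), img.cols);

    // Wrap the flat buffer in a Mat header (no data copy) and display it.
    cv::Mat img_test(img.rows, img.cols, CV_8UC1, h_img_in);
    cv::imshow("cameraman_blur", img_test);
    cv::waitKey(0);

    free(h_img_in);
    return 0;
}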

Can't isolate pixels from av_frame_copy_to_buffer

I'm trying to pull the YUV pixel data from an AVFrame, modify the pixels, and put it back into FFmpeg.
I'm currently using this to retrieve the YUV buffer
const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(base->format);
int baseSize = av_image_get_buffer_size(base->format, base->width, base->height, 32);
uint8_t *baseBuffer = (uint8_t*)malloc(baseSize);
av_image_copy_to_buffer(baseBuffer, baseSize, base->data, base->linesize, base->format, base->width, base->height, 32);
But I can't seem to correctly target pixels in that buffer. From the source code, the planes seem to be stacked on top of each other, which led me to attempt this:
int width = base->width;
int height = base->height;
int chroma2h = desc->log2_chroma_h;
int linesizeY = base->linesize[0];
int linesizeU = base->linesize[1];
int linesizeV = base->linesize[2];
int chromaHeight = (height + (1 << chroma2h) -1) >> chroma2h;
int x = 100;
int y = 100;
uint8_t *vY = base;
uint8_t *vU = base +(linesizeY*height);
uint8_t *vV = base +((linesizeY*height) + (linesizeU*chromaHeight));
vY+= x + (y * linesizeY);
vU+= x + (y * linesizeU);
vV+= x + (y * linesizeV);
Using that, if I try to modify pixels in the range (300,300) to (400,400), I get a small box darker than the rest of the video, along with horizontal stripes of darkness across the video. The original color is still there, so I think I'm still touching the Y plane through all three pointers.
How can I actually hit the pixels I want to hit?
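This is not from the original post, but for planar 8-bit formats such as YUV420P the pixels can also be addressed directly in the AVFrame through data[] and linesize[], without copying into a packed buffer first. A minimal sketch (the function name and the darken/neutral-chroma operation are only for illustration):

extern "C" {
#include <libavutil/frame.h>
#include <libavutil/pixdesc.h>
}

// Darken a rectangle of the luma plane and neutralize the chroma planes for the
// same region. 'base' is the decoded AVFrame from the question.
static void paint_box(AVFrame *base, int x0, int y0, int x1, int y1)
{
    const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get((AVPixelFormat)base->format);

    for (int y = y0; y < y1; ++y) {
        for (int x = x0; x < x1; ++x) {
            // Luma plane: one byte per pixel, rows are linesize[0] bytes apart.
            base->data[0][y * base->linesize[0] + x] /= 2;

            // Chroma planes are subsampled, so shift the coordinates down first.
            int cx = x >> desc->log2_chroma_w;
            int cy = y >> desc->log2_chroma_h;
            base->data[1][cy * base->linesize[1] + cx] = 128;
            base->data[2][cy * base->linesize[2] + cx] = 128;
        }
    }
}

Called as paint_box(base, 300, 300, 400, 400), this should affect only that rectangle, because each plane is indexed with its own linesize.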

getting image color information from both RGB32 and indexed type images

I am trying to access the image colors in a QImage.
The method I found most often in the docs is based on the scanLine() function...
I tried it and it worked... on RGB32 images. I had surprising - and unpleasant - results when using the exact same method to get color data for 8-bit indexed or monochrome images.
This was my code:
// note: RGBTriple is a struct containing unsigned R, G, B
// rgbImage.pixels is an RGBTriple* array
RGBTriple* pTriple = rgbImage.pixels;
for (int y = 0; y < source.height(); y++)
{
    const unsigned char* pScanLine = source.scanLine(y);
    for (int x = 0; x < source.width(); x++)
    {
        QRgb* color = (QRgb*)pScanLine;
        pTriple->R = qRed(*color);
        pTriple->G = qGreen(*color);
        pTriple->B = qBlue(*color);
        ++pTriple;
        pScanLine += 4;
    }
}
Running the same code with 8-bit indexed or monochrome images, I got errors when reading the colors. The documentation says that scan lines are aligned to multiples of 32 bits - but since that is a multiple of 8 and of 2, I didn't think it would be a problem.
Once I found out that I am not getting correct results for all types of input images, I changed it to
RGBTriple* pTriple = rgbImage.pixels;
for (int y = 0; y < source.height(); y++)
{
    for (int x = 0; x < source.width(); x++)
    {
        pTriple->R = qRed(source.pixel(x, y));
        pTriple->G = qGreen(source.pixel(x, y));
        pTriple->B = qBlue(source.pixel(x, y));
        ++pTriple;
    }
}
This works perfectly... I wonder if it is slower or has other unexpected behavior? After all, I am using the pixel() function - even on indexed images - to get color information, which is actually stored differently... it seems like that should fail...
Is there a way to make the first version, using scanLine(), work for other image types?
Why does it seem like using scanLine() to get the data is the preferred method?
"I tried it and it worked... on RGB32 images. I had surprising - and unpleasant - results when using the exact same method to get color data for 8-bit indexed or monochrome images."
You should not be surprised, because indexed and monochrome images are different formats. The first code snippet you posted is based on knowledge of how RGB32 (and RGB32 only) is laid out in memory.
Think about it: in a monochrome image R = G = B, so only one channel needs to be saved in memory.
If your goal is to obtain an RGB image inside rgbImage.pixels, use QImage::convertToFormat():
QImage source;
QImage dest = source.convertToFormat( QImage::Format_RGB888 );
memcpy( rgbImage.pixels, dest.bits(), dest.byteCount() );
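One caveat to add here (it is not mentioned in the answer above): QImage pads every scan line to a 32-bit boundary, so for Format_RGB888 a single memcpy() of bits() can pull padding bytes into rgbImage.pixels whenever the width is not a multiple of four. A per-row copy avoids that; a minimal sketch, with RGBTriple declared the way the question describes it:

#include <QImage>

struct RGBTriple { unsigned R, G, B; };   // "a struct containing unsigned R, G, B", per the question

// Copy 'source' into a tightly packed width*height array of RGBTriple, row by row.
void copyToTriples(const QImage &source, RGBTriple *out)
{
    QImage dest = source.convertToFormat(QImage::Format_RGB888);
    for (int y = 0; y < dest.height(); ++y)
    {
        // constScanLine() returns the start of row y; the row may be padded at the end,
        // but only width * 3 meaningful bytes are read from it.
        const uchar *line = dest.constScanLine(y);
        for (int x = 0; x < dest.width(); ++x)
        {
            out->R = line[3 * x + 0];   // Format_RGB888 stores the bytes in R, G, B order
            out->G = line[3 * x + 1];
            out->B = line[3 * x + 2];
            ++out;
        }
    }
}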

Function for creating color wheels [closed]

This is something I've pseudo-solved many times and have never quite found a solution for.
The problem is to come up with a way to generate N colors that are as distinguishable as possible, where N is a parameter.
My first thought on this is "how to generate N vectors in a space that maximize distance from each other."
You can see that RGB values (or any other scale that forms a basis in color space) are just vectors. Take a look at Random Point Picking. Once you have a set of vectors that are maximally far apart, you can save them in a hash table or something for later, and just perform random rotations on them to get all the colors you desire that are maximally apart from each other!
Thinking about this problem more, it would be better to map the colors in a linear manner, possibly (0,0,0) → (255,255,255) lexicographically, and then distribute them evenly.
I really don't know how well this will work, but it should, since, let us say:
n = 10
we know we have 16,777,216 colors (256^3).
We can use Buckles Algorithm 515 to find the lexicographically indexed color. You'll probably have to edit the algorithm to avoid overflow and probably add some minor speed improvements.
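To make the even-spacing idea concrete, the sketch below treats each 24-bit RGB value as a lexicographic index from 0 to 16,777,215 and picks n indices spread evenly over that range (this is only the plain spacing described above, not Algorithm 515 itself):

#include <cstdint>
#include <cstdio>

int main()
{
    const int n = 10;                           // number of colors wanted
    const uint32_t total = 256u * 256u * 256u;  // 16,777,216 possible colors

    for (int i = 0; i < n; ++i) {
        // The i-th of n evenly spaced lexicographic indices (64-bit math avoids overflow).
        uint32_t idx = (uint32_t)(((uint64_t)i * total) / n);
        unsigned r = (idx >> 16) & 0xFF;
        unsigned g = (idx >> 8)  & 0xFF;
        unsigned b =  idx        & 0xFF;
        std::printf("%3u %3u %3u\n", r, g, b);
    }
    return 0;
}

Note that with a purely lexicographic ordering most of the variation ends up in the red channel, which is part of why the other answers prefer hue-based or perceptual spacing.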
It would be best to find colors maximally distant in a "perceptually uniform" colorspace, e.g. CIELAB (using Euclidean distance between L*, a*, b* coordinates as your distance metric) and then converting to the colorspace of your choice. Perceptual uniformity is achieved by tweaking the colorspace to approximate the non-linearities in the human visual system.
Some related resources:
ColorBrewer - Sets of colours designed to be maximally distinguishable for use on maps.
Escaping RGBland: Selecting Colors for Statistical Graphics - A technical report describing a set of algorithms for generating good (i.e. maximally distinguishable) colour sets in the hcl colour space.
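For illustration of the CIELAB suggestion above (this code is not from the answer itself), converting 8-bit sRGB to L*a*b* and measuring Euclidean distance there is short enough to sketch; the constants assume the D65 white point:

#include <cmath>

struct Lab { double L, a, b; };

// Convert an 8-bit sRGB color to CIELAB (D65 white point).
Lab srgbToLab(double r8, double g8, double b8)
{
    // 1. Undo the sRGB gamma curve.
    auto lin = [](double c) {
        c /= 255.0;
        return c <= 0.04045 ? c / 12.92 : std::pow((c + 0.055) / 1.055, 2.4);
    };
    double r = lin(r8), g = lin(g8), b = lin(b8);

    // 2. Linear RGB -> XYZ (sRGB matrix, D65).
    double X = 0.4124 * r + 0.3576 * g + 0.1805 * b;
    double Y = 0.2126 * r + 0.7152 * g + 0.0722 * b;
    double Z = 0.0193 * r + 0.1192 * g + 0.9505 * b;

    // 3. XYZ -> Lab.
    auto f = [](double t) {
        return t > 0.008856 ? std::cbrt(t) : 7.787 * t + 16.0 / 116.0;
    };
    double fx = f(X / 0.95047), fy = f(Y / 1.0), fz = f(Z / 1.08883);
    return { 116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz) };
}

// Euclidean distance in Lab, i.e. the CIE76 delta-E.
double deltaE(const Lab &p, const Lab &q)
{
    return std::sqrt((p.L - q.L) * (p.L - q.L) +
                     (p.a - q.a) * (p.a - q.a) +
                     (p.b - q.b) * (p.b - q.b));
}

deltaE() here is the simple CIE76 difference; more elaborate metrics such as CIEDE2000 exist, but even plain Euclidean distance in Lab is far better matched to perception than distance in RGB.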
Here is some code to allocate RGB colors evenly around an HSL color wheel of specified luminosity.
#include <windows.h>   // DWORD, BYTE, WORD
#include <tchar.h>     // _tmain, _TCHAR
#include <cstdio>
#include <vector>
using namespace std;

class cColorPicker
{
public:
    void Pick( vector<DWORD>& v_picked_cols, int count, int bright = 50 );
private:
    DWORD HSL2RGB( int h, int s, int v );
    unsigned char ToRGB1( float rm1, float rm2, float rh );
};

/**
    Evenly allocate RGB colors around HSL color wheel

    @param[out] v_picked_cols  a vector of colors in RGB format
    @param[in]  count          number of colors required
    @param[in]  bright         0 is all black, 100 is all white, defaults to 50

    based on Fig 3 of http://epub.wu-wien.ac.at/dyn/virlib/wp/eng/mediate/epub-wu-01_c87.pdf?ID=epub-wu-01_c87
*/
void cColorPicker::Pick( vector<DWORD>& v_picked_cols, int count, int bright )
{
    v_picked_cols.clear();
    for( int k_hue = 0; k_hue < 360; k_hue += 360 / count )
        v_picked_cols.push_back( HSL2RGB( k_hue, 100, bright ) );
}

/**
    Convert HSL to RGB

    based on http://www.codeguru.com/code/legacy/gdi/colorapp_src.zip
*/
DWORD cColorPicker::HSL2RGB( int h, int s, int l )
{
    DWORD ret = 0;
    unsigned char r, g, b;
    float saturation = s / 100.0f;
    float luminance  = l / 100.f;
    float hue = (float)h;

    if( saturation == 0.0 )
    {
        r = g = b = (unsigned char)( luminance * 255.0 );
    }
    else
    {
        float rm1, rm2;

        if( luminance <= 0.5f ) rm2 = luminance + luminance * saturation;
        else                    rm2 = luminance + saturation - luminance * saturation;
        rm1 = 2.0f * luminance - rm2;
        r = ToRGB1( rm1, rm2, hue + 120.0f );
        g = ToRGB1( rm1, rm2, hue );
        b = ToRGB1( rm1, rm2, hue - 120.0f );
    }

    ret = ((DWORD)(((BYTE)(r)|((WORD)((BYTE)(g))<<8))|(((DWORD)(BYTE)(b))<<16)));
    return ret;
}

unsigned char cColorPicker::ToRGB1( float rm1, float rm2, float rh )
{
    if      (rh > 360.0f) rh -= 360.0f;
    else if (rh <   0.0f) rh += 360.0f;

    if      (rh <  60.0f) rm1 = rm1 + (rm2 - rm1) * rh / 60.0f;
    else if (rh < 180.0f) rm1 = rm2;
    else if (rh < 240.0f) rm1 = rm1 + (rm2 - rm1) * (240.0f - rh) / 60.0f;

    return static_cast<unsigned char>(rm1 * 255);
}

int _tmain(int argc, _TCHAR* argv[])
{
    vector<DWORD> myCols;
    cColorPicker colpick;
    colpick.Pick( myCols, 20 );
    for( int k = 0; k < (int)myCols.size(); k++ )
        printf( "%d: %d %d %d\n", k + 1,
            ( myCols[k] & 0xFF0000 ) >> 16,
            ( myCols[k] & 0xFF00 ) >> 8,
            ( myCols[k] & 0xFF ) );
    return 0;
}
Isn't the order in which you set up the colors also a factor?
Like, if you use Dillie-O's idea, you need to mix the colors as much as possible: 0 64 128 256 goes from one straight to the next, but 0 256 64 128 on a wheel would be more "apart".
Does this make sense?
I've read somewhere that the human eye can't distinguish values less than 4 apart, so this is something to keep in mind. The following algorithm does not compensate for this.
I'm not sure this is exactly what you want, but this is one way to randomly generate non-repeating color values:
(beware, inconsistent pseudo-code ahead)
// colors entered as 0-255 [R, G, B]
colors = [];           // holds final colors to be used
rand = new Random();

// assumes n is less than 16,777,216
randomGen(int n){
    while (len(colors) < n){
        // generate a random number between 0 and 255 for each channel
        newRed   = rand.next(256);
        newGreen = rand.next(256);
        newBlue  = rand.next(256);
        temp = [newRed, newGreen, newBlue];
        // only adds new colors to the array
        if temp not in colors {
            colors.append(temp);
        }
    }
}
One way you could optimize this for better visibility would be to compare the distance between each new color and all the colors in the array:
minDist = infinity;
for item in colors {
    // Euclidean distance between the candidate color and an existing color
    dist = ((item[0]-temp[0])^2 + (item[1]-temp[1])^2 + (item[2]-temp[2])^2)^(.5);
    if dist < minDist {
        minDist = dist;
    }
}
// NUMBER can be your chosen minimum distance apart.
if minDist >= NUMBER and temp not in colors {
    colors.append(temp);
}
But this approach would significantly slow down your algorithm.
Another way would be to scrap the randomness and systematically step through the values 4 at a time, adding colors to an array as in the example above.
function random_color($i = null, $n = 10, $sat = .5, $br = .7) {
    $i = is_null($i) ? mt_rand(0, $n) : $i;
    $rgb = hsv2rgb(array($i * (360 / $n), $sat, $br));
    for ($i = 0; $i <= 2; $i++)
        $rgb[$i] = dechex(ceil($rgb[$i]));
    return implode('', $rgb);
}

function hsv2rgb($c) {
    list($h, $s, $v) = $c;
    if ($s == 0)
        return array($v, $v, $v);
    else {
        $h = ($h %= 360) / 60;
        $i = floor($h);
        $f = $h - $i;
        $q[0] = $q[1] = $v * (1 - $s);
        $q[2] = $v * (1 - $s * (1 - $f));
        $q[3] = $q[4] = $v;
        $q[5] = $v * (1 - $s * $f);
        return array($q[($i + 4) % 6] * 255, $q[($i + 2) % 6] * 255, $q[$i % 6] * 255); //[1]
    }
}
So just call the random_color() function, where $i identifies the color, $n the number of possible colors, $sat the saturation, and $br the brightness.
To achieve "most distinguishable" we need to use a perceptual color space like Lab (or any other perceptually linear color space) other than RGB. Also, we can quantize this space to reduce the size of the space.
Generate the full 3D space with all possible quantized entries and run the K-means algorithm with K=N. The resulting centers/"means" should be approximately most distinguishable from each other.
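As a sketch of that idea (again, not code from the original answer), here is a bare-bones k-means over 3D color points; it assumes the points are already expressed in a perceptual space such as Lab and uses plain Euclidean distance:

#include <vector>

struct P3 { double x, y, z; };   // one color as a point in a 3D color space

static double dist2(const P3 &a, const P3 &b)
{
    return (a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y) + (a.z - b.z) * (a.z - b.z);
}

// Pick k well-separated representative colors from 'points' with a basic k-means loop.
std::vector<P3> kmeans(const std::vector<P3> &points, int k, int iters = 50)
{
    std::vector<P3> centers;
    for (int i = 0; i < k; ++i)   // crude init: spread the seeds over the input
        centers.push_back(points[(size_t)i * points.size() / k]);

    std::vector<int> label(points.size(), 0);
    for (int it = 0; it < iters; ++it) {
        // Assignment step: nearest center for every point.
        for (size_t p = 0; p < points.size(); ++p) {
            double best = dist2(points[p], centers[0]);
            label[p] = 0;
            for (int c = 1; c < k; ++c) {
                double d = dist2(points[p], centers[c]);
                if (d < best) { best = d; label[p] = c; }
            }
        }
        // Update step: move each center to the mean of its members.
        std::vector<P3> sum(k, P3{0, 0, 0});
        std::vector<int> cnt(k, 0);
        for (size_t p = 0; p < points.size(); ++p) {
            sum[label[p]].x += points[p].x;
            sum[label[p]].y += points[p].y;
            sum[label[p]].z += points[p].z;
            ++cnt[label[p]];
        }
        for (int c = 0; c < k; ++c)
            if (cnt[c]) centers[c] = P3{sum[c].x / cnt[c], sum[c].y / cnt[c], sum[c].z / cnt[c]};
    }
    return centers;   // the N "means": approximately maximally distinguishable colors
}

In practice you would fill 'points' with a coarsely quantized Lab cube (for example every 5 units along each axis, keeping only entries that map back into the RGB gamut) and call kmeans(points, N).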
