Why are XVisuals repeated in xdpyinfo - x11

Looking into my output of xdpyinfo, I see a lot of Visuals of the exact same characteristics repeated. Why are they repeated?
For example,
visual:
  visual id: 0x6e
  class: TrueColor
  depth: 32 planes
  available colormap entries: 256 per subfield
  red, green, blue masks: 0xff0000, 0xff00, 0xff
  significant bits in color specification: 8 bits
visual:
  visual id: 0x6f
  class: TrueColor
  depth: 32 planes
  available colormap entries: 256 per subfield
  red, green, blue masks: 0xff0000, 0xff00, 0xff
  significant bits in color specification: 8 bits
0x6e and 0x6f are exactly the same.
A related question: A visual already has a concept of depth, so why is it required to pass both a depth and a visual to XCreateWindow?

The two visuals are not necessarily exactly the same. They may have different GLX properties. Run glxinfo -v to see them.
The depth of the visual is the only depth it supports; one cannot create a window of any other depth. (I originally wrote that the visual's depth is the maximal depth and that you could create a window of smaller depth in it, but that is wrong.) My screen, for instance, has many visuals, all of them of depth 24 or 32, while the X server itself supports more depths: in my case 1, 4, 8, 15, 16, 24, and 32.
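To see which visuals share identical core-X properties, one can group them by everything except the visual id. Below is a rough Python sketch (the parsing and the `group_visuals` helper are my own, written against the output shape shown in the question). Visuals grouped together here may still differ in GLX attributes, which xdpyinfo does not display:

```python
from collections import defaultdict

# Sample of xdpyinfo output, taken from the question.
SAMPLE = """\
visual:
  visual id: 0x6e
  class: TrueColor
  depth: 32 planes
  available colormap entries: 256 per subfield
  red, green, blue masks: 0xff0000, 0xff00, 0xff
  significant bits in color specification: 8 bits
visual:
  visual id: 0x6f
  class: TrueColor
  depth: 32 planes
  available colormap entries: 256 per subfield
  red, green, blue masks: 0xff0000, 0xff00, 0xff
  significant bits in color specification: 8 bits
"""

def group_visuals(text):
    # Key each visual by all of its properties except the id,
    # so visuals with identical core-X characteristics end up together.
    groups = defaultdict(list)
    for block in text.split("visual:"):
        lines = [l.strip() for l in block.strip().splitlines() if l.strip()]
        if not lines:
            continue
        vid = lines[0].split()[-1]   # "visual id: 0x6e" -> "0x6e"
        props = tuple(lines[1:])     # everything except the id
        groups[props].append(vid)
    return list(groups.values())

print(group_visuals(SAMPLE))  # [['0x6e', '0x6f']]
```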


What does "bits per color channel" mean?

Determine the number of bytes necessary to store an uncompressed RGB color image of size 640 × 480 pixels using 8, 10, 12, and 14 bits per color channel.
I know how to calculate the size of an image using Size = (rows * columns * bpp), but I cannot understand what "bits per color channel" means in this question.
Bits per color channel is the number of bits used for storing one color component of a single pixel.
The RGB color space has 3 channels: Red, Green and Blue.
The "bits per color channel" (bpc) is the number of bits used for storing each component (e.g. 8 bits for red, 8 bits for green, 8 bits for blue).
The dynamic range of 8 bits is [0, 255] (255 = 2^8-1).
8 bpc implies 24 bits per pixel (bpp).
The number of bits per pixel defines the color depth of the image.
24 bpp can represent 2^24 = 16,777,216 different colors.
More bits give a larger range: the 12-bit range is [0, 4095] (4095 = 2^12-1), so a much larger variety of colors can be coded in each pixel.
12 bpc implies 36 bpp, and can represent 2^36 = 68,719,476,736 different colors.
For more information refer to the BIT DEPTH TUTORIAL.
Remark: the bits per channel are not directly tied to memory storage (e.g. it's common to store 12-bit samples in 2 bytes [16 bits] in memory).
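Applying this to the original question (640 × 480 pixels, 3 channels), a small sketch; the `image_size_bytes` helper is my own, and it assumes tight bit packing rather than padding each sample to whole bytes:

```python
def image_size_bytes(width, height, bpc, channels=3):
    # Total bits for an uncompressed image, tightly bit-packed,
    # converted to bytes.
    return width * height * channels * bpc // 8

for bpc in (8, 10, 12, 14):
    print(bpc, "bpc ->", image_size_bytes(640, 480, bpc), "bytes")
```

For 640 × 480 at 8 bpc this gives 640 * 480 * 3 * 8 / 8 = 921,600 bytes, and each extra 2 bits per channel adds another 230,400 bytes.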
As you probably know, an image is built as a matrix of pixels.
(Figures omitted: the structure of an RGB image, and the layout of a pixel with 8, 10, and 12 bits per color channel.)
The subject is much wider than that, but I think that's enough...

How to convert RGB code to 8 simple intervals (possible ?)

I'm working on my final bachelor project in Computer Science and for now I'm at a dead end.
Here is what I got stuck on:
I'm trying to classify any color (rgb code) in any of 8 (eight) simple colors.
In short terms I need to find 8 intervals where any colour can be placed and be considered a basic color (red, blue, green, black, yellow, purple, grey, brown ).
example: (18,218,23) to be classified as "green"
(81,214,85) also "green"
but
(15,52,16) needs to be "black"
(110,117,110) needs to be "grey"
So there are 256 x 256 x 256 possible colors and I need to divide them in 8 (intervals) basic colors.
I'm waiting for some suggestions.
Cheers !
To be clear (as I've seen in comments) I'm looking for a particular set of 8 colors (red, black, green, brown, blue, purple, grey, yellow). Sorry for the orange above !
Don't do this in RGB; convert to a more convenient color space. HSV is probably easiest: the 8 "colors" are then simply 8 intervals along the hue axis.
Based on your example, I'd start by determining whether all components are approximately the same, or one stands out. If they are about the same, then decide whether the values are small enough to be black; otherwise it's grey. If one value is different from the other two, it is easy to check which one differs and pick one of six possible colors accordingly.
Alternatively, set each component to either 0 or 1 according to a threshold, then you have 8 combinations to map to 8 colors.
threshold = 100:
(18,218,23) -> (0, 1, 0) - to be classified as "green"
(81,214,85) -> (0, 1, 0) - also "green"
(15,52,16) -> (0, 0, 0) - to be "black"
(110,117,110) -> (1, 1, 1) - to be "grey"
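The thresholding idea above can be written down directly. A minimal sketch, with my own `classify` helper and color names; note that the pure threshold scheme produces cyan for (0, 1, 1) rather than the brown the question asks for, and it cannot distinguish grey from white:

```python
def classify(rgb, threshold=100):
    # Map each channel to 0 or 1 by threshold, then look the triple up.
    bits = tuple(1 if c > threshold else 0 for c in rgb)
    names = {
        (0, 0, 0): "black",  (1, 0, 0): "red",
        (0, 1, 0): "green",  (0, 0, 1): "blue",
        (1, 1, 0): "yellow", (1, 0, 1): "purple",
        (0, 1, 1): "cyan",   (1, 1, 1): "grey",
    }
    return names[bits]

print(classify((18, 218, 23)))    # green
print(classify((110, 117, 110)))  # grey
```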
A simple solution is to compute the distance from defined points that you have labeled as specific colors. This is not a foolproof solution, but it is easy and should work decently.
Long answer: Color spaces are annoying and don't translate well to labels.
RGB is horrible for this kind of thing, so your first step should be converting the color to a Lab color space. You can find many open implementations of this conversion online. Once you have your Lab color, everything becomes very simple. The three values become coordinates in three dimensions which you can easily section according to color - check out the images in the wikipedia link.
I don't know too much about color spaces, but calculating the Euclidean distance would be a good solution.
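The distance idea from the answers above can be sketched as a nearest-neighbour lookup. The reference coordinates below are my own rough picks, and squared Euclidean distance in RGB is used even though a perceptual space such as Lab would behave better:

```python
# My own rough RGB anchor points for the 8 requested basic colors.
REFERENCE = {
    "red":    (255, 0, 0),     "green":  (0, 255, 0),
    "blue":   (0, 0, 255),     "black":  (0, 0, 0),
    "yellow": (255, 255, 0),   "purple": (128, 0, 128),
    "grey":   (128, 128, 128), "brown":  (139, 69, 19),
}

def nearest_basic_color(rgb):
    # Pick the reference color with the smallest squared Euclidean distance.
    return min(REFERENCE,
               key=lambda name: sum((a - b) ** 2
                                    for a, b in zip(rgb, REFERENCE[name])))

print(nearest_basic_color((18, 218, 23)))    # green
print(nearest_basic_color((110, 117, 110))) # grey
```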

Difference between GL_UNSIGNED_SHORT_4_4_4_4 and GL_UNSIGNED_SHORT_5_6_5 in OpenGL ES

Does anybody know the difference between GL_UNSIGNED_SHORT_4_4_4_4 and GL_UNSIGNED_SHORT_5_6_5 data types in OpenGL ES?
They are both 16 bit textures.
When using a 32-bit texture you have 8 bits for each of the color components plus 8 bits for alpha: maximum quality and alpha control, hence 8888.
With 16 bits there's always a compromise; if you only need color and not alpha, then use 565. Why 565? Because 16 bits can't be divided evenly by 3, and our eyes are better at seeing in the green spectrum, so you get better quality by giving the leftover bit to green.
If you need alpha but your images don't use gradients in alpha, use 5551: 5 bits for each color and 1 bit for alpha.
If your image has some gradient alpha, then you can use 4444: 4 bits for each color and 4 bits for alpha.
4444 has the worst color quality, but it retains some alpha to mix. I use it for my font textures, for example: lighter than 32 bit, and since the fonts are monochromatic they fit well in 4 bits.
I am not an OpenGL expert but :
GL_UNSIGNED_SHORT_4_4_4_4 stands for GL_UNSIGNED_SHORT_R_G_B_A, where each of the RGBA components gets 4 bits (so 2^4 = 16 levels each).
GL_UNSIGNED_SHORT_5_6_5 stands for GL_UNSIGNED_SHORT_R_G_B. You can see that there is no alpha value available here, which is a major difference. The RGB values can also take greater values, since they get 5, 6, and 5 bits respectively.
Well, when using GL_UNSIGNED_SHORT_4_4_4_4 as type in a pixel specification command (glTexImage2D or glReadPixels), the data is assumed to be laid out in system memory as one 16bit value per-pixel, with individual components each taking up 4 consecutive bits. It can only be used with a format of GL_RGBA.
Whereas GL_UNSIGNED_SHORT_5_6_5 also assumes individual pixels as 16bit values, but with the red and blue components taking up 5 bits each and the green component having 6 bits (there is no alpha channel). It can only be used with a format of GL_RGB.
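To make the memory layout concrete, here is a sketch of how a pixel would be packed into a 16-bit value for each of the two types; the `pack_*` helpers are my own, and they simply truncate 8-bit components to the target widths:

```python
def pack_rgba4444(r, g, b, a):
    # 4 bits per component, laid out as RRRR GGGG BBBB AAAA.
    return ((r >> 4) << 12) | ((g >> 4) << 8) | ((b >> 4) << 4) | (a >> 4)

def pack_rgb565(r, g, b):
    # 5 bits red, 6 bits green, 5 bits blue: RRRRR GGGGGG BBBBB.
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

print(hex(pack_rgba4444(255, 0, 0, 255)))  # 0xf00f
print(hex(pack_rgb565(255, 255, 255)))     # 0xffff
```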

Is there any way to divide rgb color palette?

I'm trying to generate a color palette which has 16 colors,
and I will display this palette in a 4x4 grid.
So I have to find a way to divide the RGB color palette, which has
256*256*256 colors, into 16 colors equally and logically.
I think it's going to be a mathematical algorithm,
because I'm trying to pick 16 vectors from the color space in equal steps.
Actually, I have found a way that solves this "dividing the color palette" problem.
I will use the color values after converting the RGB values to HSV values
(hue, saturation, value),
so I can use one integer value between 0-360, or one integer between 0-100 (%), for my color palette.
Finally, I can easily use these values for searching/filtering my data based on color selection. I'm dividing the 0-360 range into 16 pieces equally, so I can easily define 16 different colors.
But thanks for the different approaches.
You are basically projecting a cube (R X G X B) onto a square (4 X 4). First, I would start by asking myself what size cube fits inside that square.
1 X 1 X 1 = 1
2 X 2 X 2 = 8
3 X 3 X 3 = 27
The largest cube that fits in the square has 8 colors. At that point, I would note how conveniently 8 is an integral factor of 16.
I think the convenience would tempt me to use 8 basic colors in 2 variants like light and dark or saturated and unsaturated.
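The "8 basic colors in 2 variants" idea can be sketched in a few lines; the dark/light split below is my own choice of variant (halving each corner toward black, or toward white), picked so that all 16 entries come out distinct:

```python
# The 8 corners of the RGB cube:
# black, red, green, blue, yellow, magenta, cyan, white.
corners = [(r, g, b) for r in (0, 255) for g in (0, 255) for b in (0, 255)]

# Two variants per corner: a dark one (channels in {0, 127}) and a light one
# (channels in {128, 255}), giving 16 distinct colors for the 4x4 grid.
palette = [tuple(c // 2 + offset for c in corner)
           for offset in (0, 128) for corner in corners]

print(len(palette))  # 16
```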
You can approach this as a purely mathematical equipartition problem, but then it isn't really about color.
If you are trying to equipartition a color palette in a way that is meaningful to human perception, there are a large number of non-linearities that need to be taken into account which this article only mentions. For example, the colors #fffffe, #fffeff, and #feffff occupy far corners of the mathematical space, but are nearly indistinguishable to the human eye.
When the number of selected colors (16) is so small (especially compared to the number of available colors), you'll be much better off hand-picking the good-looking palette or using a standard one (like some pre-defined system or Web palette for 16 color systems) instead of trying to invent a mathematical algorithm for selecting the palette.
A lot depends on what the colors are for. If you just want 16 somewhat arbitrary colors, I would suggest:
black darkgray lightgray white
darkred darkgreen darkblue darkyellow
medred medgreen medblue medyellow
lightred lightgreen lightblue lightyellow
I used that color set for a somewhat cartoonish-colored game (VGA) and found it worked pretty well. I think I sequenced the colors a little differently, but the sequence above would seem logical if arranged in a 4x4 square.
This is a standard problem and known as color quantization.
There are a couple of algorithms for this:
Objective: you basically want to make 16 clusters of your pixels in a 3-dimensional space where each axis varies from 0 to 255.
Methods are:
1) Rounding off the first significant bits: very easy to implement, but does not give a good result.
2) Histogram method: takes medium effort and gives a better result.
3) Quad tree: a state-of-the-art data structure. It gives the best result, but implementing the quad tree data structure is hard.
There might be some more algorithms, but I have used these 3.
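Method 1 (rounding off to the first significant bits) is short enough to sketch; the `quantize` helper is my own. Note that keeping b bits per channel yields (2^b)^3 palette entries, so this approach produces 8 or 64 colors rather than exactly 16:

```python
def quantize(rgb, bits=1):
    # Keep only the top `bits` significant bits of each 8-bit channel.
    mask = 0xFF ^ (0xFF >> bits)
    return tuple(c & mask for c in rgb)

print(quantize((200, 30, 130), bits=1))  # (128, 0, 128)
print(quantize((200, 30, 130), bits=2))  # (192, 0, 128)
```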
Start with the color as an integer for obvious math (or start with hex if you can think in base 16). Add the sample step to the color for each desired sample. Convert the color integer to hex, then split the hex into RGB. In this code example the last color will be within one step of hex white (0xffffff).
# calculate color sample sizes
divisions = 16  # number of desired color samples
total_colors = 256**3 - 1
color_samples = total_colors // divisions
print('{0:,} colors in {1:,} parts requires {2:,} per step'.format(total_colors, divisions, color_samples))

# loop to print results
ii = 0
for io in range(0, total_colors, color_samples):
    hex_color = '{0:0>6}'.format(hex(io)[2:])
    rc = hex_color[0:2]  # red
    gc = hex_color[2:4]  # green
    bc = hex_color[4:6]  # blue
    print('{2:>5,} - {0:>10,} in hex {1} | '.format(io, hex_color, ii), end='')
    print('r-{0} g-{1} b-{2} | '.format(rc, gc, bc), end='')
    print('r-{0:0>3} g-{1:0>3} b-{2:0>3}'.format(int(rc, 16), int(gc, 16), int(bc, 16)))
    ii += 1

Compressing/packing "don't care" bits into 3 states

At the moment I am working on an on screen display project with black, white and transparent pixels. (This is an open source project: http://code.google.com/p/super-osd; that shows the 256x192 pixel set/clear OSD in development but I'm migrating to a white/black/clear OSD.)
Since each pixel is black, white or transparent I can use a simple 2 bit/4 state encoding where I store the black/white selection and the transparent selection. So I would have a truth table like this (x = don't care):
B/W T
x 0 pixel is transparent
0 1 pixel is black
1 1 pixel is white
However as can be clearly seen this wastes one bit when the pixel is transparent. I'm designing for a memory constrained microcontroller, so whenever I can save memory it is good.
So I'm trying to think of a way to pack these 3 states into some larger unit (say, a byte.) I am open to using lookup tables to decode and encode the data, so a complex algorithm can be used, but it cannot depend on the states of the pixels before or after the current unit/byte (this rules out any proper data compression algorithm) and the size must be consistent; that is, a scene with all transparent pixels must be the same as a scene with random noise. I was imagining something on the level of densely packed decimal which packs 3 x 4-bit (0-9) BCD numbers in only 10 bits with something like 24 states remaining out of the 1024, which is great. So does anyone have any ideas?
Any suggestions? Thanks!
In a byte (256 possible values) you can store 5 of your three-state values. One way to look at it: three to the fifth power is 243, slightly less than 256. The fact that it's only slightly less also shows that you're hardly wasting any fraction of a bit.
For encoding five of your 3-state "digits" into a byte, think of taking the number in base 3 made from your five digits in succession; the resulting value is guaranteed to be less than 243 and therefore directly storable in a byte. Similarly, for decoding, do the base-3 conversion of the byte's value.
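The base-3 packing described above can be sketched as follows; `pack5`/`unpack5` are my own names, and the least significant trit is stored first:

```python
def pack5(trits):
    # Pack five values in {0, 1, 2} into one byte (3**5 = 243 <= 256).
    value = 0
    for t in reversed(trits):
        value = value * 3 + t
    return value

def unpack5(byte):
    # Recover the five base-3 digits, least significant first.
    trits = []
    for _ in range(5):
        trits.append(byte % 3)
        byte //= 3
    return trits

print(pack5([2, 2, 2, 2, 2]))  # 242, the largest value used
```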
