How to align two figures with different labels?

These are two figures in my paper. Apparently, the two axes are not in alignment. How can I make sure that two figures generated independently align with each other? The problem is that the two figures have different labels and different ticklabels. That makes the positions of the axes different.

For each axes, find its x/y/width/height. Then adjust the second (or both) to have the same height and the same "y":
get(gca,'position')
set(gca, 'position', [0.1300 0.15 0.3347 0.3412]) %x y width height, dummy numbers
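A minimal sketch of doing that across the two figures, assuming ax1 and ax2 are handles to the axes in each figure (hypothetical names):
pos1 = get(ax1, 'Position');   % [left bottom width height]
pos2 = get(ax2, 'Position');
pos2([2 4]) = pos1([2 4]);     % copy bottom and height so the axes boxes line up
set(ax2, 'Position', pos2);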

Related

Labeling plots with images in Matlab

Is there any way of labeling plots with images? For example, when I use the following:
plot(Y(:,1),Y(:,2),'o','LineWidth',2);
gname(names)
I can label each dot in a plot with a name. Is there any way to insert images instead of names?
It is possible, but far less convenient than gname. You can use the low-level version of image to insert images into your plot at arbitrary positions. Here's a simple example which puts the "Mandrill" image that comes with Matlab with its upper left corner pixel at the position (pi/2, 0):
% example plot
x = linspace(0, 2*pi, 100);
plot(x, cos(x))
% insert image
load mandrill
colormap(map)
image('CData', X, 'XData', [pi/2, pi/2 + 0.5], 'YData', [0, -0.3])
Problems with this approach:
There is no interactive point-and-click facility; you have to insert and position the image labels programmatically, or program such a point-and-click facility yourself. ginput might help with that.
A figure window can only have one associated color map. That means if you have different images, they either all have to use the same colormap or have to be truecolor images.
Not just the position, but also the display size of the image has to be specified in the call to image, and both are by default specified with respect to the plot's coordinate system. This makes it hard to achieve the correct aspect ratio. You can switch (temporarily) to absolute units using the axes property 'Units', but then you have to figure out the correct position in e.g. absolute millimeters or inches. Moreover, images are usually indexed with vertical coordinates increasing from top to bottom, while plots usually have vertical coordinates increasing from bottom to top. This is the reason for the negative value -0.3 in the 'YData' property above.
Alternatively, you can insert the images each in their own little axes sitting on top of the plot's axes, which makes it easy to get the right orientation and aspect ratio using axis image. You'll still have the problem, though, of figuring out the correct position for those axes; a rough sketch of this approach follows.
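A rough sketch of that alternative, reusing the mandrill example from above (the axes position values are made up):
% main plot
x = linspace(0, 2*pi, 100);
plot(x, cos(x))
% small axes on top of the plot, position given in normalized figure units
hImgAx = axes('Position', [0.4 0.55 0.2 0.2]);
load mandrill
image(X)       % drawn into hImgAx, which is now the current axes
colormap(map)
axis image     % correct aspect ratio (orientation comes from image's default YDir)
axis off       % hide the little axes' ruler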

I have a true-color image; how do I select the 256 colors from it? [duplicate]

I am playing with computer graphics programming for the first time. I want to convert RGB (24-bit) images to indexed-palette (8-bit) images (like GIF). My initial thought is to use k-means (with k=256).
How would one go about picking the optimal palette for a given image? This is a learning experience for me, so I would prefer an overview-type answer to source code.
Edit: Dithering is currently off-topic. I am only referring to "simple" color conversion, psycho-visual/perceptual models aside; color-space is also currently off-topic, though moving between color-spaces is what got me thinking about this in the first place :)
http://en.wikipedia.org/wiki/Color_quantization
Octree
Median-cut
K-means
Gamut subdivision
http://www.cs.berkeley.edu/~dcoetzee/downloads/scolorq/
The reference links people have provided are good, and there are several solutions to this problem, but since I've been working on this problem recently (with complete ignorance as to how others have solved it), I offer my approach in plain English:
Firstly, realize that (human-perceived) color is 3-dimensional. This is fundamentally because the human eye has 3 distinct receptors: red, green, and blue. Likewise your monitor has red, green, and blue pixel elements. Other representations, like hue, saturation, luminance (HSL), can be used, but basically all representations are 3-dimensional.
This means RGB color space is a cube, with red, green, and blue axes. From a 24-bit source image, this cube has 256 discrete levels on each axis. A naive approach to reducing the image to 8-bit is to simply reduce the levels per axis. For instance, an 8x8x4 cube palette with 8 levels for red and green, 4 levels for blue is easily created by taking the high 3 bits of the red and green values, and the high 2 bits of the blue value. This is easy to implement, but has several disadvantages. In the resulting 256 color palette, many entries will not be used at all. If the image has detail using very subtle color shifts, these shifts will disappear as the colors snap into the same palette entry.
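For concreteness, a small MATLAB sketch of that naive 8x8x4 reduction (the image file is just a placeholder):
img = imread('peppers.png');              % any 24-bit RGB image
r = img(:,:,1); g = img(:,:,2); b = img(:,:,3);
% high 3 bits of red and green, high 2 bits of blue -> one byte per pixel (RRRGGGBB)
idx = bitshift(bitshift(r, -5), 5) + bitshift(bitshift(g, -5), 2) + bitshift(b, -6);
% idx now indexes an implicit 8x8x4 palette of 256 fixed colors (0-based; add 1 for MATLAB indexing)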
An adaptive palette approach needs to account for not just averaged/common colors in the image, but which areas of color space have the greatest variance. That is, an image that has thousands of subtle shades of light green requires a different palette than an image that has thousands of pixels of exactly the same shade of light green, since the latter would ideally use a single palette entry for that color.
To this end, I took an approach that results in 256 buckets containing exactly the same number of distinct values each. So if the original image contained 256000 distinct 24-bit colors, this algorithm results in 256 buckets each containing 1000 of the original values. This is accomplished by binary spatial partitioning of color space using the median of distinct values present (not the mean).
In English, this means we first divide the whole color cube into the half of pixels with less than the median red value and the half with more than the median red value. Then, divide each resulting half by green value, then by blue, and so on. Each split requires a single bit to indicate the lower or higher half of pixels. After 8 splits, variance has effectively been split into 256 equally important clusters in color space.
In pseudo-code:
// count distinct 24-bit colors from the source image
// to minimize resources, an array of arrays is used
paletteRoot = {colors: [ [color0,count],[color1,count], ...]} // root node has all values
for (i=0; i<8; i++) {
    colorPlane = i%3 // red, green, blue, red, green, blue, red, green
    nodes = leafNodes(paletteRoot) // on first pass, this is just the root itself
    for (node in nodes) {
        node.colors.sort(colorPlane) // sort by red, green, or blue
        node.lo = { colors: node.colors[0..node.colors.length/2] }
        node.hi = { colors: node.colors[node.colors.length/2..node.colors.length] }
        delete node.colors // free up space! otherwise will explode memory
        node.splitColor = node.hi.colors[0] // remember the median color used to partition
        node.colorPlane = colorPlane // remember which color this node split on
    }
}
You now have 256 leaf nodes, each containing the same number of distinct colors from the original image, clustered spatially in the color cube. To assign each node a single color, find the weighted average using the color counts. The weighting is an optimization that improves perceptual color matching, but is not that important. Make sure to average each color channel independently. The results are excellent. Note that it is intentional that blue is split one fewer time than red and green, since the blue receptors in the eye are less sensitive to subtle changes than the red and green ones.
There are other optimizations possible. By using HSL you could put the extra split in the luminance dimension rather than blue. Also, the above algorithm slightly reduces the overall dynamic range (since it ultimately averages color values), so dynamically expanding the resulting palette is another possibility.
EDIT:
updated to support palette of 256 colors
If you need the simplest method, I would suggest a histogram-based approach:
Calculate histograms of the R/G/B channels
Define 4 intensity ranges
For each channel in each intensity range, split its histogram into 4 equal parts
For each histogram part, extract the most frequent value of that part
Now you will have a palette of 4*4^3 = 256 colors. When assigning a pixel to a palette color, just calculate the pixel's average intensity to see which intensity region you must use. After that, map the pixel value to one of the 64 colors of that intensity region. A rough sketch follows.
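A rough MATLAB sketch of that idea (not the answerer's code; the image file and the 16-bin histograms per part are arbitrary placeholder choices):
img = double(imread('peppers.png')) / 255;    % placeholder RGB image, scaled to [0,1]
inten = mean(img, 3);                         % per-pixel average intensity
edges = [0 0.25 0.5 0.75 1 + eps];            % the 4 intensity ranges
palette = zeros(256, 3); p = 0;
for r = 1:4
    mask = inten >= edges(r) & inten < edges(r+1);
    reps = zeros(4, 3);                       % 4 representative values per channel
    for c = 1:3
        chan = img(:,:,c);
        vals = sort(chan(mask));
        if isempty(vals), vals = edges(r); end
        cut = vals(max(1, round((0:4) / 4 * numel(vals))));  % 4 equal-count parts
        for k = 1:4
            part = vals(vals >= cut(k) & vals <= cut(k+1));
            [cnt, ctr] = hist(part, 16);      % histogram of this part
            [~, m] = max(cnt);
            reps(k, c) = ctr(m);              % its most frequent value
        end
    end
    [ri, gi, bi] = ndgrid(1:4, 1:4, 1:4);     % all combinations -> 64 colors per range
    palette(p + (1:64), :) = [reps(ri(:), 1), reps(gi(:), 2), reps(bi(:), 3)];
    p = p + 64;
end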
Good luck.
This might be a little late to answer, but try this:
build a set of the colors in the image,
sort them by red, then green, then blue (when the preceding channels are equal); they are now in a list,
suppress nearby colors if they are too similar, i.e. their distance in RGB space is smaller than 4,
if there are still too many colors, suppress the least-used ones.
Each time you suppress a color, add an entry to a hash map recording the color and its destination; a rough sketch follows.
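A rough MATLAB sketch of the merging step (the image file is a placeholder; the loop is fine as a sketch but slow for images with many distinct colors):
img = imread('peppers.png');                                   % placeholder image
cols = sortrows(double(unique(reshape(img, [], 3), 'rows')));  % distinct colors, sorted R, G, B
kept  = cols(1, :);                                            % colors we keep
remap = containers.Map('KeyType', 'char', 'ValueType', 'any'); % suppressed -> destination
for i = 2:size(cols, 1)
    d = sqrt(sum((kept - repmat(cols(i,:), size(kept,1), 1)).^2, 2));
    [dmin, j] = min(d);
    if dmin < 4
        remap(mat2str(cols(i,:))) = kept(j, :);                % too similar: record where it maps
    else
        kept(end+1, :) = cols(i, :);                           %#ok<AGROW>
    end
end
% if more than 256 colors remain in 'kept', the least-used ones would be suppressed next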

Display image at desired scale across subplot axes

Is it possible to display an image in multiple subplot axes, such that the image appears at the desired scale?
subplot(3,3,[1 4 7]);
%# image scaled down to fit 1 set of axes
imshow(img);
subplot(3,3,2);
plot(relevantData);
%# And so on with 5 other plots
I want to have the image scaled to either a fixed size or to fit the axes available to it, rather than to the size of a single axes.
My use case is to show a video alongside plots derived from the video, such that the plots are progressively drawn in step with the video. Once the display is correct I can save each image and combine them into a video.
Clarification
I am asking if it is possible to produce a figure as described without specifying the position of every element in absolute terms. Though one can make arbitrary figures that way (and in fact I have done so for this project), it is very tedious.
Edit:
For changing the size of the subplot:
In help subplot they mention that you can set parameters on the selected "axes" (that's what they call a plotting area in Matlab).
Using that, you can set the 'position', as seen in help axes. This property takes as its argument:
[left, bottom, width, height]
As pointed out by @reve_etrange, one should use absolute positioning for the axes 'Position' and 'OuterPosition' parameters. They can be in normalized coordinates, though.
For changing the size of the image in the subplot:
I think there are 2 useful things for you in the help imshow output:
'InitialMagnification': setting the magnification of the image.
'Parent': determines which parent axes imshow will put the image into (I've never tried using imshow with subplots); a sketch follows.
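A rough sketch combining the spanning subplot with the 'Parent' option (the image file and plotted data are placeholders):
img = imread('peppers.png');           % placeholder for a video frame
figure
hImgAx = subplot(3,3,[1 4 7]);         % one axes spanning the left column
imshow(img, 'Parent', hImgAx);         % the image fills the combined axes
subplot(3,3,2); plot(rand(1,50));      % placeholder for relevantData
subplot(3,3,5); plot(rand(1,50));
% ...and so on for the remaining panels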

How to get a 1 pixel line with NSBezierPath?

I'm developing a custom control. One of the requirements is to draw lines. Although this works, I noticed that my 1 pixel wide lines do not really look like 1 pixel wide lines - I know, they're not really pixels but you know what I mean. They look more like two or three pixels wide. This becomes very apparent when I draw a dashed line with a 1 pixel dash and a 2 pixel gap. The 1 pixel dashes actually look like tiny lines instead of dots.
I've read the Cocoa Drawing documentation and although Apple mentions the setLineWidth method, changing the line width to values smaller than 1.0 will only make the line look more vague and not thinner.
So, I suspect there's something else influencing the way my lines look.
Any ideas?
Bezier paths are drawn centered on their path, so if you draw a 1 pixel wide path along the X-coordinate, the line actually draws along Y-coordinates { -0.5, 0.5 }. The solution is usually to offset the coordinate by 0.5 so that the line does not straddle the pixel boundary. You should be able to shift your bounding box by 0.5 to get sharper drawing behavior.
Francis McGrew already gave the right answer, but since I did a presentation on this once, I thought I'd add some pictures.
The problem here is that coordinates in Quartz lie at the intersections between pixels. This is fine when filling a rectangle, because every pixel that lies inside the coordinates gets filled. But lines are technically (mathematically!) invisible. To draw them, Quartz has to actually draw a rectangle with the given line width, centered over the coordinates.
So when you ask Quartz to stroke a rectangle with integral coordinates, it has the problem that it can only draw whole pixels, yet here we have half pixels. So what it does is average the color: for a line that is 50% black (the line color) and 50% white (the background), it simply draws each pixel in grey.
This is where your washed-out drawings come from. The fix is now obvious: don't draw between pixels. You achieve that by moving your points by half a pixel, so your coordinate is centered over the desired pixel.
Now of course just offsetting may not be what you wanted, because if you compare the filled variant to the stroked one, the stroke is one pixel larger towards the lower right. If you're e.g. clipping to the rectangle, this will cut off the lower right.
Since people usually expect the stroke to stay inside the specified rectangle, what you usually do is offset by 0.5 towards the center, so the lower right effectively moves up one pixel. Alternately, many drawing apps offset by 0.5 away from the center, to avoid overlap between the border and the fill (which can look odd when you're drawing with transparency).
Note that this only holds true for 1x screens. 2x Retina screens actually exhibit this problem differently, because each of the pixels below is actually drawn by 4 Retina pixels, which means they can actually draw the half-pixels. However, you still have the same problem if you want a sharp 0.5pt line. Also, since Apple may in the future introduce other Retina screens where e.g. every pixel is made up of 9 Retina pixels (3x), or whatever, you should really not rely on this. Instead, there are now API calls to convert rectangles to "backing aligned", which does this for you, no matter whether you're running 1x, 2x, or a fictitious 3x.
PS - Since I went to the hassle of writing this all up, I've put this up on my web site: http://orangejuiceliberationfront.com/are-your-rectangles-blurry-pale-and-have-rounded-corners/ where I'll update and revise this description and add more images.
The answer is (buried) in the Apple Docs:
"To avoid antialiasing when you draw a one-point-wide horizontal or vertical line, if the line is an odd number of pixels in width, you must offset the position by 0.5 points to either side of a whole-numbered position"
Hidden in Drawing and Printing Guide for iOS: iOS Drawing Concepts, though nothing that specific is to be found in the current, standard (OS X) Cocoa Drawing Guide.
As for the effects of invoking setDefaultLineWidth: the docs also state that:
"A width of 0 is interpreted as the thinnest line that can be rendered on a particular device. The actual rendered line width may vary from the specified width by as much as 2 device pixels, depending on the position of the line with respect to the pixel grid and the current anti-aliasing settings. The width of the line may also be affected by scaling factors specified in the current transformation matrix of the active graphics context."
I found some info suggesting that this is caused by anti-aliasing. Turning anti-aliasing off temporarily is easy:
[[NSGraphicsContext currentContext] setShouldAntialias: NO];
This gives a crisp, 1 pixel line. After drawing just switch it on again.
I tried the solution suggested by Francis McGrew by offsetting the x coordinate with 0.5, however that did not make any difference to the appearance of my line.
EDIT:
To be more specific, I changed x and y coordinates individually and together with an offset of 0.5.
EDIT 2:
I must have done something wrong, as changing the coordinates with an offset of 0.5 actually does work. The end result is better than the one obtained by switching off the anti-aliasing, so I'll make Francis McGrew's answer the accepted answer.

Bresenham line algorithm (thickness)

I was wondering if anyone knew of any algorithm to draw a line with specific thickness, based on Bresenham's line algorithm or any similar.
On second thought, I've been wondering whether for each setPixel(x,y) I could just draw a circle, e.g.:
filledCircle(x,y,thickness); for every x,y, but that would of course be very slow. I also tried to use a dictionary, but that would fill the memory in no time. Checking whether the pixels I'm about to draw on already have the same color is also not efficient enough for large brushes.
Perhaps I could somehow draw half circles depending on the angle?
Any input would be appreciated.
Thanks.
duplicate: how do I create a line of arbitrary thickness using Bresenham?
You cannot actually draw circles along the line. This approach is patented. :)
You can still read the patent for inspiration.
I don't know what is commonly used, but it seems to me that you could use Bresenham for the 1-pixel-wide line, but extend it a set number of pixels vertically or horizontally. For instance, suppose your line is roughly 30 degrees away from the horizontal, and you want it to be four pixels wide. You calculate that the vertical thickness of the line should be five pixels. You run Bresenham, but for each pixel (x,y), you actually draw (x,y), (x,y+1), ... (x,y+4). And if you want the ends of the line to be rounded, draw a circle at each end. A rough sketch of this idea follows.
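A rough MATLAB sketch of that idea, assuming a numeric matrix canvas as the framebuffer and vt as the precomputed vertical thickness (hypothetical helper, suited to shallow lines):
function canvas = thickLine(canvas, x0, y0, x1, y1, vt)
    % standard integer Bresenham, but paint a vertical run of vt pixels per step
    dx = abs(x1 - x0); dy = abs(y1 - y0);
    sx = sign(x1 - x0); sy = sign(y1 - y0);
    err = dx - dy;
    x = x0; y = y0;
    while true
        for t = 0:vt-1                        % the vertical thickening
            yy = y + t;
            if yy >= 1 && yy <= size(canvas, 1) && x >= 1 && x <= size(canvas, 2)
                canvas(yy, x) = 1;            % column x, row yy
            end
        end
        if x == x1 && y == y1, break; end
        e2 = 2 * err;
        if e2 > -dy, err = err - dy; x = x + sx; end
        if e2 <  dx, err = err + dx; y = y + sy; end
    end
end
Usage might look like canvas = thickLine(zeros(100, 200), 10, 20, 180, 80, 5); imagesc(canvas).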
For overkill, make a pixel map of the stylus (a circle or diagonal nib, or whatever), then draw a set of parallel Bresenham lines, one for each pixel in the stylus.
There are variations on Bresenham's which calculate pixel coverage, such as those used in the anti-grain geometry libraries; whether you want something of that quality depends on your needs. You don't say what the output medium is, and most systems more capable than on-off LCDs support pens with thickness anyway.
