d3 log scale: base not having an effect? - d3.js

I feel like this may simply be me misunderstanding how logarithmic scales work, but I can't seem to get D3 log scale bases to work.
In this fiddle, I attempt to create three scales with the same set of base-10 ticks: a base-10 log scale, a base-4 log scale, and a base-2 log scale. According to my layperson understanding of logarithmic scales, the base-10 log scale looks correct: the powers of ten are equidistantly spaced on the axis. But the base-4 and base-2 scales are identical; it seems to me that the labels should compress to the right on those two scales. What's going on?
Fiddly code:
ticks = [1, 10, 100, 1000, 10000, 100000, 1000000, 10000000]
elems = { ten: 10, four: 4, two: 2 }
for selector, base of elems
  dom = d3.select('.scale.' + selector)
  scale = (new d3.scale.log()).base(base).domain([ 0.5, 10000000 ]).range([ 0, 500 ])
  dom.selectAll('div')
    .data(ticks)
    .enter().append('div')
      .text((x) -> x)
      .style('left', (x) -> scale(x) + 'px')

The base() method is a bit of a misnomer, IMO. It is actually a displayBase method, in that it only updates the positioning and label of the ticks.
Scales use the domain and range settings to do their magic. This is a nicely consistent way of doing things across many different scales, but with logs it is a little unintuitive.
The awkward way: in your case, modify the upper value of the domain array. For example, if the range is [0, 1000], then the domain should be [1, 3] for log base 10, and [0, 9.9658 (approx)] for log base 2. In general, the top domain value is the log (base x) of the top range value. Ditto with the bottom range value if it is higher than 0.
The simpler way: Leave the range as [0,1]. Then you can set the domain as [1, base]. The scale takes care of interpolation outside the range values.
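For what it's worth, here is a minimal Python sketch of the positional math (not d3's actual source, just the formula a log scale interpolates with), which shows why the base cancels out of the positions and only affects the ticks:

import math

def log_scale(domain, rng, base):
    # position = linear interpolation of log(x) between log(d0) and log(d1)
    d0, d1 = domain
    r0, r1 = rng
    def scale(x):
        t = (math.log(x, base) - math.log(d0, base)) / (math.log(d1, base) - math.log(d0, base))
        return r0 + t * (r1 - r0)
    return scale

s10 = log_scale((0.5, 1e7), (0, 500), 10)
s2  = log_scale((0.5, 1e7), (0, 500), 2)
print(s10(1000), s2(1000))   # identical: the base divides out of the ratio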

Related

Mapping one continuous data range to another nonlinearly

Sorry about the vague title. I'm not sure how to concisely word what I'm about to ask. This is more of a math/algorithms question than a programming question.
In an app that I'm developing, we have a value that can fluctuate anywhere between 0 and a predetermined maximum (in testing it has usually hovered around 100, so let's just say 100). This range of data is continuous, meaning there are an infinite number of possible values: as long as it's between 0 and 100, it's possible.
Right now, any value returned from this is mapped to a different range that is also continuous, from 1000 to 200. So if the value from the first set is 100, I map it to 200, and if the value from the first set is 0, it gets mapped to 1000. And of course everything in between. This is what the code looks like:
- (float)mapToRange:(float)val withMax:(float)maxVal {
    // Establish range constants.
    const int upperBound = 1000;
    const int lowerBound = 200;
    const int bandwidth = upperBound - lowerBound;

    // Make sure we don't go above the calibrated maximum.
    if (val > maxVal)
        val = maxVal;

    // Scale the original value to our new boundaries.
    float scaled = val / maxVal;
    float ret = upperBound - scaled * bandwidth;
    return ret;
}
Now, what I want to do is make it so that the higher original values (closer to 100) change in larger increments than the lower original values (closer to 0). Meaning, if I slowly decrease from 100 to 0 at a steady rate, the new values, starting at 200, move quickly toward 1000 at first but in smaller and smaller increments the closer they get to 1000. What would be the best way to go about doing this?
Your value scaled is basically the 0-100 value represented in the range 0-1 so it's good to work with. Try raising this to an integer power, and the result will increase faster near 1 and slower near 0. The higher the power, the larger the effect. So something like:
float scaled = val/maxVal;
float bent = scaled*scaled*scaled*scaled; // or however you want to write x^4
float ret = upperBound - bent*bandwidth;
Here's a sketch of the idea:
That is, the span A to B maps to the smaller span a to b, while the span C to D maps to the larger span c to d. The larger the power of the polynomial, the more the curve will be bent into the lower right corner.
The advantage of using the 0 to 1 range is that the endpoints stay fixed since x^n=x when x is 0 or 1, but this, of course, isn't necessary as anything could be compensated for by the appropriate shifting and scaling.
Note also that this map isn't symmetric (though my drawing sort of looks that way), though of course a symmetric curve could be chosen. If you want the curve to bend the other way, choose a power less than 1.
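For illustration, here is a hypothetical Python version of the whole mapping with the power bend (constants taken from the question, the power of 4 from the snippet above; the function name is mine):

def map_to_range(val, max_val, lower=200, upper=1000, power=4):
    val = min(val, max_val)        # clamp to the calibrated maximum
    scaled = val / max_val         # 0..1
    bent = scaled ** power         # grows faster near 1, slower near 0
    return upper - bent * (upper - lower)

print(map_to_range(100, 100))  # 200.0
print(map_to_range(90, 100))   # ~475.1
print(map_to_range(50, 100))   # 950.0
print(map_to_range(0, 100))    # 1000.0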

Algorithm to make overly bright (HDR) colours become white?

You know how every colour eventually turns white in an image if it's bright enough or sufficiently over-exposed? I'm trying to figure out a function to do this to apply to generated HDR images, in a realistic and pleasing looking way (using idealised camera performance as a reference I guess).
The problem the algorithm/function I want to obtain should solve is this: let's say you have an orange pixel with the (linear RGB) values {1.0, 0.2, 0.0}. Everything is fine if you multiply each value by a factor of 1.0 or less, but say you multiply that pixel by 6; now you get {6.0, 1.2, 0.0}. What do you do with your out-of-range red and green values of 6.0 and 1.2? You could clip them, which would give you {1.0, 1.0, 0.0}, which sadly is what Photoshop and 3DS Max seem to do, but it looks very wrong: your formerly orange pixel is now yellow (so if you start with any saturated hue, meaning at least one channel is 0.0, you always end up with either magenta, yellow or cyan), and it will never become white.
I considered taking half of the excess of one channel and splitting it equally between the other channels, so for example {1.6, 0.5, 0.1} would become {1.0, 0.8, 0.4} but it's too simplistic and not very realistic. I strongly doubt that an acceptable solution could be anywhere near this trivial.
I'm sure there must have been research done on the topic, but I cannot find any relevant literature and sensitometry doesn't seem to be quite what I'm looking for.
Modifying the Python code I left in an answer on another question to work in the range [0.0-1.0]:
def redistribute_rgb(r, g, b):
    threshold = 1.0
    m = max(r, g, b)
    if m <= threshold:
        return r, g, b
    total = r + g + b
    if total >= 3 * threshold:
        return threshold, threshold, threshold
    x = (3 * threshold - total) / (3 * m - total)
    gray = threshold - x * m
    return gray + x * r, gray + x * g, gray + x * b
This should return acceptable results in either a linear or gamma-corrected color space, although linear will be better.
Multiplying each r,g,b value by the same amount retains their original proportions and thus the hue, up to the point where x=0 and you've achieved white. You've expressed interest in a non-linear response once clipping starts, but I'm not entirely sure how to work that in. The math was carefully chosen so that at least one of the returned values will be at the threshold, and none will be above.
Running this on your example of (1.6, 0.5, 0.1) returns (1.0, 0.6615, 0.5385).
I've found a way to do it based on Mark Ransom's suggestion, with a twist. When the colour is out of gamut, we compute the grey colour of equivalent perceptual luminosity, then we linearly interpolate between the out-of-gamut input colour and that grey value to find the first in-gamut colour. Weighting each RGB channel to get the perceptual luminosity is the tricky part, seeing as the most commonly used formula, L = 0.2126*red + 0.7152*green + 0.0722*blue (from CIELab), is quite blatantly wrong: it makes the blue way too bright. Instead I did some tests and chose the weights that looked the most correct to me. These are not definitive and you might want to tweak them, although for this particular problem that is perhaps not too crucial.
Or in fewer words the solution is to desaturate the out-of-gamut colour just enough that it might be in-gamut.
Here is my solution in C code. All variables are in floating point format.
Wr = 0.125; Wg = 0.68; Wb = 0.195;   // these are the weights for each colour

max = MAXN(MAXN(red, grn), blu);     // max is the maximum value of the 3 colours

if (max > 1.)                        // if the colour is out of gamut
{
    L = Wr*red + Wg*grn + Wb*blu;    // luminosity of the colour's grey point

    if (L < 1.)                      // if the grey point is no brighter than white
    {
        // t represents the ratio on the line between the input colour
        // and its corresponding grey point. t is between 0 and 1,
        // a lower t meaning closer to the grey point and a
        // higher t meaning closer to the input colour
        t = (1.-L) / (max-L);

        // a simple linear interpolation between the
        // input colour and its grey point
        red = red*t + L*(1.-t);
        grn = grn*t + L*(1.-t);
        blu = blu*t + L*(1.-t);
    }
    else                             // if it's too bright regardless of saturation
    {
        red = grn = blu = 1.;
    }
}
Here's what it looks like with a linear orange gradient:
It does not use anything like an arbitrary gamma, which is good; the only mostly arbitrary thing is the luminosity weights, but I guess those are quite necessary.
You have to map it to some non-linear scale. For example: http://en.wikipedia.org/wiki/Gamma_correction .
Ex: Let y = f(x) = log(1+x) - log(1-x) define the "actual" luminescence.
The reverse function is x = g(y) = (e^y-1)/(e^y+1).
Now, you have the values x=1 and x=0.2. For the first case the corresponding y is infinity, and six times infinity is still infinity; if you use the function g, you get x_new = 1.
For x=0.2, y = 0.4054651. After multiplying by 6, y_new = 2.432791. The corresponding x_new = 0.8385876.
For x=0, x_new will still be 0 (I will leave the calculations to you).
So starting from (1.0, 0.2, 0.0) your new set of values is (1.0, 0.8385876, 0.0).
This is one example of a mapping function. There are an infinite number of them. Choose one that looks best to you.
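For illustration, here is a minimal Python sketch of that mapping applied per channel (the function names are mine, not from the answer):

import math

def f(x):
    # forward map: y = log(1+x) - log(1-x), the "actual" luminescence
    return math.log(1 + x) - math.log(1 - x)

def g(y):
    # inverse map: x = (e^y - 1) / (e^y + 1)
    return (math.exp(y) - 1) / (math.exp(y) + 1)

def expose(channel, factor):
    # multiply in the mapped space; x = 1 maps to infinite y, so it stays 1
    if channel >= 1.0:
        return 1.0
    return g(factor * f(channel))

print(tuple(expose(c, 6) for c in (1.0, 0.2, 0.0)))   # ~ (1.0, 0.8385876, 0.0)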

Find frequency for non-binned, weighted data

Here is a tricky problem (or at least I think so). I need to create a histogram, but instead of having the data and its frequency, I have repeated (i.e. not binned) data and a weight for each data point.
One example:
Angle | Weight
90 .... 3/10
93 .... 2/10
180 .... 2/10
180 .... 1/10
95 .... 2/10
I want to create a histogram with bin size 10. The y-values should be the sum of weighted frequencies for angles within a range. How can I do it? Preferably Mathematica or pseudocode...
In Mathematica 9, you can do it using the WeightedData function like this:
Histogram[WeightedData[{90, 93, 180, 180, 95}, {3/10, 2/10, 2/10, 1/10, 2/10}], {10}]
You should then get a graphic like this one:
Since the expected output is not forthcoming I shall adopt Verbeia's interpretation. You might use something like this:
dat = {{90, 3/10}, {93, 1/5}, {180, 1/5}, {180, 1/10}, {95, 1/5}};

bars =
  Reap[
    Sow[#2, Floor[#, 10]] & @@@ dat,
    _,
    {#, Tr@#2} &
  ][[2]]

Graphics[
  Rectangle[{#, 0}, {# + 10, #2}] & @@@ bars,
  AspectRatio -> 1/GoldenRatio,
  Axes -> True,
  AxesOrigin -> {Min @ bars[[All, 1]], 0}
]
I did something similar for a different kind of question recently (weighting by balance sheet size).
Assuming your data is in an N * 2 matrix list, I would do something like:
{numbers, weights} = {data[[All, 1]], data[[All, 2]]*10};

weightednumbers = Flatten@ MapThread[
    Table[#1, {#2}] &, {numbers, Ceiling[weights]}];
And then use Histogram to draw the histogram on this transformed data.
There might be other ways but this works.
An important point is to make sure the weights are integers, so the Table has the correct iterator. This might require defining weights as data[[All,2]]/Min[data[[All,2]]].
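As a cross-check outside Mathematica (not part of the original answers), the same binning can be sketched with numpy, which sums the weights of the samples falling into each 10-degree bin:

import numpy as np

angles  = np.array([90, 93, 180, 180, 95], dtype=float)
weights = np.array([3/10, 2/10, 2/10, 1/10, 2/10])

counts, edges = np.histogram(angles, bins=np.arange(0, 361, 10), weights=weights)
print(dict(zip(edges[:-1][counts > 0], counts[counts > 0])))   # roughly {90.0: 0.7, 180.0: 0.3}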

Rounding (flooring) numbers to asymmetric resolution

I am building a variable resolution ruler control for a data visualization program. I want it to show axes tick labels depending on zoom level.
For example, with a very wide zoom, it would plot only the [100, 200, 300] tick labels. Then if I zoom in closer, it would show, say, the [10, 15, 20, 25] labels.
The numbers would always be multiples of 5 or 10. So a partial "rounding grid" would be [1, 5, 10, 50, 100, 500, 1000, 5000], but that would grow infinitely in both directions (including decimal numbers).
The question is: given an arbitrary positive floating point number, how could I 'floor' it to the first smaller number in the grid?
(This would allow me to set the granularity of the tick labels according to the plotter's zoom level/scale).
Mind that:
The grid is "infinite" (that's why I didn't use sorting and other list methods);
There is no fixed set of values for the floating-point number, except that it's always positive and non-zero.
(EDIT) The resolution IS NOT SYMMETRIC, the tick intervals are, say, [1,5,10,50,100,500,...], the increment multipliers being, alternately, 2 and 5!
Thanks for reading
If the most significant digit is < 5, use 1 followed by (k-1) zeros, where k is the length of the number; else use 5 followed by (k-1) zeros. The following function works.
import numpy as np

def truncate(x):
    length = 0
    if x >= 1:
        while x >= 10:
            length += 1
            x /= 10
    else:
        while x < 1:
            length -= 1
            x *= 10
    r = max(0, -length)
    if x < 5:
        return round(np.power(10.0, length), r)
    else:
        return round(5 * np.power(10.0, length), r)
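For example, with the function above, truncate(437) gives 100.0, truncate(7.2) gives 5.0, and truncate(0.03) gives 0.01.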
Use the modulo operator.
It is % in many languages; below, ^ means power.
If tick is an integer, use:
newValue = value - value % tick
If tick is a float (or even an integer), use:
tickDecimals = decimalsOf(tick) // pseudo function to know how many decimals tick has
newValue = (value * 10 ^ tickDecimals - ((value * 10 ^ tickDecimals) % (tick * 10 ^ tickDecimals))) / 10 ^ tickDecimals // some parentheses are redundant
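A minimal Python sketch of that modulo/floor idea for a float tick (the helper name is mine):

import math

def floor_to_tick(value, tick):
    # floor value down to the nearest multiple of tick (tick may be a float)
    return math.floor(value / tick) * tick

print(floor_to_tick(437.3, 25))      # 425.0
print(437.3 - 437.3 % 25)            # 425.0, the modulo form gives the same result
print(floor_to_tick(0.037, 0.005))   # 0.035 (up to float rounding)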

An algorithm to solve a simple(?) array problem

For this problem speed is pretty crucial. I've drawn a nice image to explain the problem better. The algorithm needs to determine: if the edges of a rectangle are extended within the confines of the canvas, will they intersect another rectangle?
We know:
The size of the canvas
The size of each rectangle
The position of each rectangle
The faster the solution is the better! I'm pretty stuck on this one and don't really know where to start.
Image: http://www.freeimagehosting.net/uploads/8a457f2925.gif
Cheers
Just create the set of intervals for each of the X and the Y axis. Then for each new rectangle, see if there are intersecting intervals in the X or the Y axis. See here for one way of implementing the interval sets.
In your first example, the interval set on the horizontal axis would be { [0-8], [0-8], [9-10] }, and on the vertical: { [0-3], [4-6], [0-4] }
This is only a sketch; I abstracted away many details here (e.g. usually one would ask an interval set/tree "which intervals overlap this one" instead of "which intersect this one", but nothing that isn't doable).
Edit
Please watch this related MIT lecture (it's a bit long, but absolutely worth it).
Even if you find simpler solutions (than implementing an augmented red-black tree), it's good to know the ideas behind these things.
Lines that are not parallel to each other are going to intersect at some point. Calculate the slopes of each line and then determine what lines they won't intersect with.
Start with that, and then let's see how to optimize it. I'm not sure how your data is represented and I can't see your image.
Using slopes is a simple equality check which probably means you can take advantage of sorting the data. In fact, you can probably just create a set of distinct slopes. You'll have to figure out how to represent the data such that the two slopes of the same rectangle are not counted as intersecting.
EDIT: Wait, how can two rectangles whose edges go to infinity not intersect? Rectangles are basically two lines that are perpendicular to each other. Shouldn't that mean one always intersects with another if those lines are extended to infinity?
Since you didn't mention the language you chose to solve the problem, I will use some kind of pseudocode.
The idea is that if everything is OK, then a sorted collection of rectangle edges along one axis should be a sequence of non-overlapping intervals.
1. Number all your rectangles, assigning them individual ids.
2. Create an empty binary tree collection (btc). This collection should have a method to insert an integer node with info: btc::insert(key, value).
3. For all rectangles, do:

   foreach rect in rects do
       btc.insert(rect.top, rect.id)
       btc.insert(rect.bottom, rect.id)

4. Now iterate through the btc (this will give you a sorted order):

   btc_item = btc.first()
   do
       id = btc_item.id
       btc_item = btc.next()
       if (id != btc_item.id)
           then report_invalid_placement(id, btc_item.id)
       btc_item = btc.next()
   while btc_item is valid

5. Repeat steps 2, 3 and 4 for the rect.left and rect.right coordinates.
I like this question. Here is my attempt at it:
If possible:
Create a polygon from each rectangle. Treat each edge as a line of maximum length that must be clipped. Use a clipping algorithm to check whether or not a line intersects with another. For example this one: Line Clipping
But keep in mind: if you find an intersection that is at a vertex position, it's a valid one.
Here's an idea. Instead of creating each rectangle with (x, y, width, height), instantiate them with (x1, y1, x2, y2), or at least have it interpret these values given the width and height.
That way, you can check which rectangles have a similar x or y value and make sure the corresponding rectangle has the same secondary value.
Example:
The rectangles you have given have the following values:
Square 1: [0, 0, 8, 3]
Square 3: [0, 4, 8, 6]
Square 4: [9, 0, 10, 4]
First, we compare Square 1 to Square 3 (no collision):
Compare the x values
[0, 8] to [0, 8] These are exactly the same, so there's no crossover.
Compare the y values
[0, 3] to [4, 6] None of these numbers are similar, so they're not a factor
Next, we compare Square 3 to Square 4 (collision):
Compare the x values
[0, 8] to [9, 10] None of these numbers are similar, so they're not a factor
Compare the y values
[4, 6] to [0, 4] The rectangles have the number 4 in common, but 0 != 6, therefore, there is a collision
By now we know that a collision will occur, so the method will end, but let's evaluate Square 1 and Square 4 for some extra clarity.
Compare the x values
[0, 8] to [9, 10] None of these numbers are similar, so they're not a factor
Compare the y values
[0, 3] to [0, 4] The rectangles have the number 0 in common, but 3 != 4, therefore, there is a collision
Let me know if you need any extra details :)
Heh, taking the overlapping intervals answer to the extreme, you simply determine all distinct intervals along the x and y axis. For each cutting line, do an upper bound search along the axis it will cut based on the interval's starting value. If you don't find an interval or the interval does not intersect the line, then it's a valid line.
The slightly tricky part is to realize that valid cutting lines will not intersect a rectangle's bounds along an axis, so you can combine overlapping intervals into a single interval. You end up with a simple sorted array (which you fill in O(n) time) and a O(log n) search for each cutting line.
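A minimal Python sketch of that idea (merge the intervals once, then binary-search each candidate cutting line; the helper names are mine):

import bisect

def merge_intervals(intervals):
    # merge overlapping [start, end] intervals into a sorted, disjoint list
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return merged

def line_is_clear(merged, coord):
    # a cutting line at coord is valid if it only touches interval endpoints
    starts = [s for s, _ in merged]
    i = bisect.bisect_right(starts, coord) - 1
    return i < 0 or coord == merged[i][0] or coord >= merged[i][1]

xs = merge_intervals([[0, 8], [0, 8], [9, 10]])   # x-intervals from the example
print(line_is_clear(xs, 8.5))   # True: falls in the gap between the rectangles
print(line_is_clear(xs, 5))     # False: would cut through a rectangle's interior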
