I am building a variable-resolution ruler control for a data visualization program. I want it to show axis tick labels depending on the zoom level.
For example, at a very wide zoom it would plot only the tick labels [100, 200, 300]. Then, if I zoom in, it would show, say, [10, 15, 20, 25].
The numbers would always be multiples of 5 or 10. So a partial "rounding grid" would be [1, 5, 10, 50, 100, 500, 1000, 5000], but it would grow infinitely in both directions (including decimal numbers).
The question is: given an arbitrary positive floating point number, how could I 'floor' it to the first smaller number in the grid?
(This would allow me to set the granularity of the tick labels according to the plotter's zoom level/scale).
Mind that:
The grid is "infinite" (that's why I didn't use sorting and other list methods);
There is no fixed set of values to the floating point number, except it's always positive and non-zero.
(EDIT) The resolution IS NOT SYMMETRIC: the tick intervals are, say, [1, 5, 10, 50, 100, 500, ...], with the increment multipliers alternating between 5 and 2!
Thanks for reading
If the most significant digit is < 5, use 1 followed by (k-1) zeros, where k is the number of digits; otherwise use 5 followed by (k-1) zeros. The following function works.
import numpy as np

def truncate(x):
    length = 0
    if x >= 1:
        while x >= 10:
            length += 1
            x /= 10
    else:
        while x < 1:
            length -= 1
            x *= 10
    r = max(0, -length)  # decimal places to keep for sub-unit grids
    if x < 5:
        return round(np.power(10.0, length), r)
    else:
        return round(5 * np.power(10.0, length), r)
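The same mapping can also be written in closed form with `math.log10` instead of the normalization loop (a sketch; `floor_to_grid` is my own name for it, not from the question):

```python
import math

def floor_to_grid(x):
    """Floor a positive number to the nearest smaller-or-equal
    grid value of the form 1*10^e or 5*10^e."""
    e = math.floor(math.log10(x))   # exponent of the magnitude
    m = x / 10 ** e                 # mantissa in [1, 10)
    return (5.0 if m >= 5 else 1.0) * 10.0 ** e
```

For example, `floor_to_grid(137)` gives 100.0 and `floor_to_grid(0.7)` gives 0.5.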
Use the modulo operator. It is written % in many languages; below, ^ means power.
if tick is an integer, use:
newValue = value - value % tick
if tick is a float (or even an integer) use:
tickDecimals = decimalsOf(tick) // pseudo-function giving the number of decimals tick has
newValue = (value * 10 ^ tickDecimals - ((value * 10 ^ tickDecimals) % (tick * 10 ^ tickDecimals))) / 10 ^ tickDecimals // some parentheses are redundant
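A sketch of the integer-scaling trick in Python; `snap_down` and the string-based stand-in for `decimalsOf()` are my own names, not from the answer:

```python
def snap_down(value, tick):
    """Floor `value` to the nearest smaller-or-equal multiple of `tick`.
    Scales both numbers to integers first so the modulo is exact."""
    # stand-in for the decimalsOf() pseudo-function: count tick's decimals
    s = f"{tick:.10f}".rstrip("0")
    decimals = len(s.split(".")[1]) if "." in s else 0
    scale = 10 ** decimals
    v = round(value * scale)
    t = round(tick * scale)
    return (v - v % t) / scale
```

For example, `snap_down(7.3, 0.25)` gives 7.25 and `snap_down(17, 5)` gives 15.0.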
Currently I need a function that returns a weighted random number.
It should choose a random number between two doubles/integers (for example 4 and 8), while the value in the middle (6) occurs, on average, about twice as often as the limit values 4 and 8.
If this were only about integers, I could predefine the values with variables and custom probabilities, but I need the function to return a double with at least 2 decimal places (meaning thousands of different numbers)!
The environment I use is "Game Maker", which provides all sorts of basic random generators, but not weighted ones.
Could anyone point me in the right direction on how to achieve this?
Thanks in advance!
The sum of two independent continuous uniform(0,1)'s, U1 and U2, has a continuous symmetrical triangle distribution between 0 and 2. The distribution has its peak at 1 and tapers to zero at either end. We can easily translate that to a range of (4,8) via scaling by 2 and adding 4, i.e., 4 + 2*(U1 + U2).
However, you don't want a height of zero at the endpoints, you want half the peak's height. In other words, you want a triangle sitting on a rectangular base (i.e., uniform), with height h at the endpoints and height 2h in the middle. That makes life easy, because the triangle must have a peak of height h above the rectangular base, and a triangle with height h has half the area of a rectangle with the same base and height h. It follows that 2/3 of your probability is in the base, 1/3 is in the triangle.
Combining the elements above leads to the following pseudocode algorithm. If rnd() is a function call that returns continuous uniform(0,1) random numbers:
define makeValue()
    if rnd() <= 2/3  # Caution, may want to use 2.0/3.0 in many languages
        return 4 + (4 * rnd())
    else
        return 4 + (2 * (rnd() + rnd()))
I cranked out a million values using that and plotted a histogram to confirm the trapezoidal shape.
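The pseudocode translates directly into Python (a sketch; the 2/3 mixture weight is the base-vs-triangle area split derived above):

```python
import random

def make_value():
    """Trapezoidal sample on [4, 8]: uniform base (2/3 of the mass)
    plus a triangular peak (1/3 of the mass) centered at 6."""
    if random.random() <= 2 / 3:
        return 4 + 4 * random.random()                       # uniform base
    else:
        return 4 + 2 * (random.random() + random.random())   # triangle peak
```

Over many draws, the density at 6 comes out twice the density at the endpoints 4 and 8.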
In case someone needs this in Game Maker (or a different language) as a universal function:

if random(1) <= argument0
    return argument1 + ((argument2-argument1) * random(1))
else
    return argument1 + (((argument2-argument1)/2) * (random(1) + random(1)))

Called as follows (similar to the standard random_range function):

val = weight_random_range(FACTOR, FROM, TO)

"FACTOR" determines how much of the whole probability figure is the constant-probability "base", e.g. 2/3 for the figure above.
0 will produce a perfect triangle and 1 a rectangle (no weighting).
I feel like this may be simply me misunderstanding how logarithmic scales work, but I can't seem to get D3 log scale bases to work.
In this fiddle, I attempt to create three scales with the same set of base10 ticks, with a base10 log scale, a base4 log scale, and a base2 log scale. According to my layperson understanding of logarithmic scales, the base10 log scale looks correct — the powers of ten are equidistantly spaced on the axis. But, the base4 and base2 scales are identical — it seems to me that the labels should compress to the right on those two scales. What's going on?
Fiddly code:
ticks = [1, 10, 100, 1000, 10000, 100000, 1000000, 10000000]
elems = { ten: 10, four: 4, two: 2 }
for selector, base of elems
  dom = d3.select('.scale.' + selector)
  scale = (new d3.scale.log()).base(base).domain([ 0.5, 10000000 ]).range([ 0, 500 ])
  dom.selectAll('div')
    .data(ticks)
    .enter().append('div')
    .text((x) -> x)
    .style('left', (x) -> scale(x) + 'px')
The base() method is a bit of a misnomer, IMO. It is actually a displayBase method, in that it only updates the positioning and label of the ticks.
Scales use the domain and range settings to do their magic. This is a nicely consistent way of doing things across many different scales, but with logs it is a little unintuitive.
The awkward way: In your case, modify the upper value of the domain array. For example, if the range is [0, 1000], then the domain should be [1, 3] for log-base10, and [0, 9.9658 (approx)] for log-base2. In general, top domain value is the log(base-x) of the top range value. Ditto with the bottom range value if it is higher than 0.
The simpler way: Leave the range as [0,1]. Then you can set the domain as [1, base]. The scale takes care of interpolation outside the range values.
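The claim that base() cannot move the ticks can be checked numerically: a log scale's position formula is r0 + (r1 - r0) * log_b(x/d0) / log_b(d1/d0), and the base b cancels in the ratio. A quick sketch (my own helper, not D3 code):

```python
import math

def log_position(x, domain, rng, base=10):
    """Pixel position of x on a log scale. The base cancels in the
    ratio log_b(x/d0) / log_b(d1/d0), so it cannot affect layout."""
    d0, d1 = domain
    r0, r1 = rng
    t = math.log(x / d0, base) / math.log(d1 / d0, base)
    return r0 + t * (r1 - r0)

# same positions regardless of base:
for x in (1, 10, 100, 1000):
    p10 = log_position(x, (0.5, 1e7), (0, 500), base=10)
    p2 = log_position(x, (0.5, 1e7), (0, 500), base=2)
    assert abs(p10 - p2) < 1e-9
```

This is exactly what the fiddle shows: the base10, base4, and base2 scales lay out the ticks identically.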
Sorry about the vague title. I'm not sure how to concisely word what I'm about to ask. This is more of a math/algorithms question than a programming question.
In an app that I'm developing, we have a value that can fluctuate anywhere between 0 and a predetermined maximum (in testing it has usually hovered around 100, so let's just say 100). This range of data is continuous, meaning there are an infinite number of possible values: as long as it's between 0 and 100, it's possible.
Right now, any value returned from this is mapped to a different range that is also continuous, from 1000 to 200. So if the value from the first set is 100, I map it to 200, and if the value from the first set is 0, it gets mapped to 1000. And of course everything in between. This is what the code looks like:
- (float)mapToRange:(float)val withMax:(float)maxVal {
    // Establish range constants.
    const int upperBound = 1000;
    const int lowerBound = 200;
    const int bandwidth = upperBound - lowerBound;

    // Make sure we don't go above the calibrated maximum.
    if (val > maxVal)
        val = maxVal;

    // Scale the original value to our new boundaries.
    float scaled = val / maxVal;
    float ret = upperBound - scaled * bandwidth;
    return ret;
}
Now, what I want to do is make it so that the higher original values (closer to 100) increase in larger increments than the lower original values (closer to 0). Meaning if I slowly start decreasing from 100 to 0 at a steady rate, the new values starting at 200 move quickly toward 1000 at first but go in smaller increments the closer they get to 1000. What would be the best way to go about doing this?
Your value scaled is basically the 0-100 value represented in the range 0-1 so it's good to work with. Try raising this to an integer power, and the result will increase faster near 1 and slower near 0. The higher the power, the larger the effect. So something like:
float scaled = val/maxVal;
float bent = scaled*scaled*scaled*scaled; // or however you want to write x^4
float ret = upperBound - bent*bandwidth;
Here's a sketch of the idea:
That is, the span A to B, maps to the smaller span a to b, while the span C to D maps to the larger span c to d. The larger the power of the polynomial, the more the curve will be bent into the lower right corner.
The advantage of using the 0 to 1 range is that the endpoints stay fixed since x^n=x when x is 0 or 1, but this, of course, isn't necessary as anything could be compensated for by the appropriate shifting and scaling.
Note also that this map isn't symmetric (though my drawing sort of looks that way), though of course a symmetric curve could be chosen. If you want the curve to bend the other way, choose a power less than 1.
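Putting the pieces together as a Python sketch (the original is Objective-C; the function name and the exposed power parameter are my own choices):

```python
def map_to_range(val, max_val, power=4):
    """Map val in [0, max_val] to [1000, 200], with higher inputs
    moving the output in larger increments (power > 1 bends the curve)."""
    upper, lower = 1000, 200
    bandwidth = upper - lower
    val = min(val, max_val)    # clamp to the calibrated maximum
    scaled = val / max_val     # normalize to [0, 1]
    bent = scaled ** power     # x**n: moves fast near 1, slowly near 0
    return upper - bent * bandwidth
```

The endpoints stay fixed (0 maps to 1000, max_val maps to 200) while the halfway input now maps to 950 rather than 600.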
I have a fixed size 2D space that I would like to fill with an arbitrary number of equal sized squares. I'd like an algorithm that determines the exact size (length of one side) that these squares should be in order to fit perfectly into the given space.
Notice that there must be an integer number of squares which fill the width and height. Therefore, the aspect ratio must be a rational number.
Input: width(float or int), height(float or int)
Algorithm:
aspectRatio = RationalNumber(width/height).lowestTerms  # must be a rational number
# it must be the case that our return values satisfy
# numHorizontalSquares / numVerticalSquares = aspectRatio
return {
    numHorizontalSquares = aspectRatio.numerator,
    numVerticalSquares = aspectRatio.denominator,
    squareLength = width/aspectRatio.numerator
}
If the width/height is a rational number, your answer is merely any multiple of the aspect ratio! (e.g. if your aspect ratio was 4/3, you could fill it with 4x3 squares of length width/4=height/3, or 8x6 squares of half that size, or 12x9 squares of a third that size...) If it is not a rational number, your task is impossible.
You convert a fraction to lowest terms by factoring the numerator and denominator and removing all duplicate factor pairs; this is equivalent to just using the greatest common divisor algorithm, GCD(numer, denom), and dividing both numerator and denominator by it.
Here is an example implementation in python3:
from fractions import Fraction

def largestSquareTiling(width, height, maxVerticalSquares=10**6):
    """
    Returns the minimum number (corresponding to the largest size) of squares
    which will perfectly tile a width x height rectangle.
    Return format:
        (numHorizontalTiles, numVerticalTiles), tileLength
    """
    if isinstance(width, int) and isinstance(height, int):
        aspectRatio = Fraction(width, height)
    else:
        aspectRatio = Fraction.from_float(width / height)
    aspectRatio2 = aspectRatio.limit_denominator(max_denominator=maxVerticalSquares)
    if aspectRatio != aspectRatio2:
        raise Exception('too many squares')  # optional
    aspectRatio = aspectRatio2
    squareLength = width / aspectRatio.numerator
    return (aspectRatio.numerator, aspectRatio.denominator), squareLength
e.g.
>>> largestSquareTiling(2.25, 11.75)
((9, 47), 0.25)
You can tune the optional parameter maxVerticalSquares to give yourself more robustness against floating-point imprecision (at the cost that the operation may fail), or to avoid a large number of vertical squares (e.g. if this is architecture code and you are tiling a floor). Depending on the range of numbers you are working with, a default value of maxVerticalSquares=500 might be more reasonable (possibly without the exception code).
Once you have this, and a range of desired square lengths (minLength, maxLength), you just multiply:
# inputs
desiredTileSizeRange = (0.09, 0.13)  # (minLength, maxLength)
(minHTiles, minVTiles), maxTileSize = largestSquareTiling(2.25, 11.75)

# calculate the integral shrinkFactor
shrinkFactorMin = maxTileSize / desiredTileSizeRange[1]
shrinkFactorMax = maxTileSize / desiredTileSizeRange[0]
shrinkFactor = int(shrinkFactorMax)
if shrinkFactor < shrinkFactorMin:
    raise Exception('desired tile size range too restrictive; no tiling found')
If shrinkFactor is now 2 for example, the new output value in the example would be ((9*2,47*2), 0.25/2).
If the sides are integers, you need to find the greatest common divisor of the sides A and B; that is the side of the square you are looking for.
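For integer sides this is a one-liner with the standard library (a quick sketch of this answer's approach; `square_side` is my own name):

```python
import math

def square_side(a, b):
    """Side of the largest square that perfectly tiles an a x b
    rectangle with integer sides: the GCD of the two sides."""
    return math.gcd(a, b)

side = square_side(12, 8)  # 4, giving a 3 x 2 grid of squares
```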
You want to use a space-filling curve or a spatial index. An SFC recursively subdivides the plane into 4 tiles and reduces the 2D problem to a 1D one. You want to look for Nick's hilbert curve quadtree spatial index blog.
I think this is what you want. At least it solved the problem I was googling for when I found this question.
// width = width of area to fill
// height = height of area to fill
// squareCount = number of squares to fill the area with
function CalcSquareSide(float width, float height, int squareCount)
{
    return Math.Sqrt((height * width) / squareCount);
}
I need help regarding the DDA algorithm. I'm confused by a tutorial I found online on the DDA algorithm; here is the link:
http://i.thiyagaraaj.com/tutorials/computer-graphics/basic-drawing-techniques/1-dda-line-algorithm
Example:
xa,ya=>(2,2)
xb,yb=>(8,10)
dx=6
dy=8
xincrement=6/8=0.75
yincrement=8/8=1
1) for(k=0;k<8;k++)
xincrement=0.75+0.75=1.50
yincrement=1+1=2
1=>(2,2)
2) for(k=1;k<8;k++)
xincrement=1.50+0.75=2.25
yincrement=2+1=3
2=>(3,3)
Now I want to ask: how does the line xincrement=0.75+0.75=1.50 come about, when the theory says:
"If the slope is greater than 1, swap the roles of x and y: sample at unit y intervals (Dy=1) and compute each successive x value.
Dy=1
m= Dy / Dx
m= 1/ ( x2-x1 )
m = 1 / ( xk+1 – xk )
xk+1 = xk + ( 1 / m )
"
it should be xincrement = x1 (which is 2) + 0.75 = 2.75,
or am I understanding it wrong? Can anyone please teach me how it's done?
Thanks a lot :)
There seems to be a bit of confusion here.
To start with, let's assume 0 <= slope <= 1. In this case, you advance one pixel at a time in the X direction. At each X step, you have a current Y value. You then figure out whether the "ideal" Y value is closer to your current Y value, or to the next larger Y value. If it's closer to the larger Y value, you increment your current Y value. Phrased slightly differently, you figure out whether the error in using the current Y value is greater than half a pixel, and if it is you increment your Y value.
If slope > 1, then (as mentioned in your question) you swap the roles of X and Y. That is, you advance one pixel at a time in the Y direction, and at each step determine whether you should increment your current X value.
Negative slopes work pretty much the same, except you decrement instead of incrementing.
Pixels locations are integer values. Ideal line equations are in real numbers. So line drawing algorithms convert the real numbers of a line equation into integer values. The hard and slow way to draw a line would be to evaluate the line equation at each x value on your array of pixels. Digital Differential Analyzers optimize that process in a number of ways.
First, DDAs take advantage of the fact that at least one pixel is known, the start of the line. From that pixel, the DDAs calculate the next pixel in the line, until they reach the end point of the line.
Second, DDAs take advantage of the fact that along either the x or y axis, the next pixel in the line is always the next integer value towards the end of the line. DDA's figure out which axis by evaluating the slope. Positive slopes between 0 and 1 will increment the x value by 1. Positive slopes greater than one will increment the y value by 1. Negative slopes between -1 and 0 will increment the x value by -1, and negative slopes less than -1 will increment the y value by -1.
Third, DDAs take advantage of the fact that if the change in one direction is 1, the change in the other direction is a function of the slope. This is easier to explain for a specific case, so consider positive slopes between 0 and 1. To find the next pixel to plot, x is incremented by 1, and the change in y is calculated. One way to calculate the change in y is to add the slope to the previous y and round to the nearest integer value. This doesn't work unless you maintain the y value as a real number. Slopes greater than one instead increment y by 1 and calculate the change in x.
Fourth, some DDAs further optimize the algorithm by avoiding floating point calculations. For example, Bresenham's line algorithm is a DDA optimized to use integer arithmetic.
In this example, a line from (2, 2) to (8, 10), the slope is 8/6, which is greater than 1. The first pixel is at (2, 2). The next pixel is calculated by incrementing the y value by 1, and adding the change in x (the inverse slope, dx/dy = 6/8 = .75) to x. The value of x would be 2.75, which is rounded to 3, and (3, 3) is plotted. The third pixel would increment y again, and then add the change in x to x (2.75 + .75 = 3.5). Rounding would plot the third pixel at (4, 4). The fourth pixel would then plot (4, 5), since y is incremented by 1 again while x increases by .75 to 4.25, which rounds down to 4.
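That walk can be reproduced with a minimal DDA sketch in Python for the steep case (slope > 1, stepping in y; floor(x + 0.5) gives the round-half-up used in the arithmetic above):

```python
import math

def dda_steep(x0, y0, x1, y1):
    """DDA for a line with slope > 1: step y by 1, add the inverse
    slope (dx/dy) to a running x, and round x to the nearest pixel."""
    x_inc = (x1 - x0) / (y1 - y0)   # inverse slope, e.g. 6/8 = 0.75
    pixels, x = [], float(x0)
    for y in range(y0, y1 + 1):
        pixels.append((math.floor(x + 0.5), y))  # round half up
        x += x_inc
    return pixels

# the (2,2)-(8,10) walk from the example:
# dda_steep(2, 2, 8, 10)[:4] == [(2, 2), (3, 3), (4, 4), (4, 5)]
```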
From this example, can you see the problem with your code?