When I cast(2.555555 as Decimal64(2)), ClickHouse truncates the decimal part, giving 2.55. How can I round half up to get 2.56 instead? Currently I have to call the round() function before casting.
SELECT CAST(round(2.555555, 2), 'Decimal64(2)') AS r
/*
┌────r─┐
│ 2.56 │
└──────┘
*/
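For reference, the truncate-versus-round-half-up distinction can be reproduced outside ClickHouse with Python's decimal module (a sketch of the same behavior, not ClickHouse itself):

```python
from decimal import Decimal, ROUND_DOWN, ROUND_HALF_UP

x = Decimal("2.555555")

# Truncation toward zero -- what a plain cast to Decimal64(2) does:
truncated = x.quantize(Decimal("0.01"), rounding=ROUND_DOWN)

# Round half up -- what calling round() before the cast achieves:
rounded = x.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

print(truncated, rounded)  # 2.55 2.56
```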
I'm trying to create a maze with a recursive backtracking algorithm / Eller's method, on a grid with an even number of columns and an even number of rows.
There's a problem when the number of columns / rows is even.
For instance if I have a 4*4 map:
X***
****
****
***X
The top left corner is the beginning, the other the end. Basically, I can't create a sort of "checkerboard" as with a 5*5 maze (or any odd size, I think).
X|X|X
-----
X|X|X
-----
X|X|X
Because for a 4*4 it would give:
X|X|
----
X|X|
----
(Knowing that I have to reach the bottom right corner from the top left one).
Do algorithms exist for this pattern, or do I have to find a way to handle it within recursive backtracking / Eller's method?
Also, the width of the path can be more than 1 unit.
Example for a 12*4:
* = empty cells
X = walls
1*XXXX****XX
X****X*X**XX
X*XX***XX***
X*XXXX***XX2
Entry is 1, exit 2
Another example for a 4*4:
1*XX
X**X
***X
XX*2
The entry and exit points are always the same, and the width of the path can be one or more; it does not matter.
Thanks!
If your data structure requires that grid cells are used for walls, then indeed you must make sure that the width and height of your grid are odd. If you have a recursive process, then you have to make sure that the subgrids also have this property (odd number of rows and columns). Two adjacent subgrids need to share one column or one row. So make sure you correctly divide the grid for the recursive process.
If for example you want to apply the iterative Eller's algorithm to potentially generate the following example:
1*XXXX****XX
X****X*X**XX
X*XX***XX***
X*XXXX***XX2
Then you need to see each character as a cell (also the X cells), not as a wall. The walls are between the cells. So you can picture the above grid in the following "syntax":
┌───────┬───────────────┬───────────────┬───────┐
│ 1 │ x x x x │ │ x x │
├───┐ └───────────┐ │ ┌───┐ │ │
│ x │ │ x │ │ x │ │ x x │
│ │ ┌───────┐ └───┘ │ └───┐ └───────┤
│ x │ │ x x │ │ x x │ │
│ │ │ └───────┐ └───────┼───────┐ │
│ x │ │ x x x x │ │ x x │ 2 │
└───┴───┴───────────────┴───────────┴───────┴───┘
So here the areas with "x" are not really walls -- they are enclosures that are unreachable. This would not happen with the above referenced algorithm, because of the steps that say:
add at least one vertical connection
...and:
... the last row. This time, we must connect ALL adjacent (but disjoint) cells
If you omit these requirements, such enclosures can be created. The walls in this syntax (used by the referenced algorithm) are separations (lines), either horizontal or vertical. They don't translate to "X" in your notation.
I finally decided to use the recursive backtracking algorithm because it is better suited to creating a maze in C (the language I used). With Eller's method, I would have had to create an integer array and then convert it to a char array.
To solve the issue with even numbers of columns / rows, here is the trick.
Let's assume that I have a 4*4 maze (including walls). Thus, at the beginning it will render as
XXXX 1XXX
XXXX XXXX
XXXX -> Let's add the entry point (1) + exit (2) -> XXXX
XXXX XXX2
We haven't defined the walls yet, as each wall occupies a cell in our case.
What we will do is "set" the walls on every second row and column, beginning at the first.
1|X|
---- --> - and | are walls
X|X|
---2
We can now use the recursive backtracking algorithm, except we advance two cells at a time instead of one. However, as the exit point sits on a wall cell, you would never reach it. Just carve, at the beginning of your algorithm, a random empty cell above the exit or to its left.
1XXX 1XXX
XXXX XXXX
XXX* OR XXXX --> * = empty cells (reachable in the maze)
XXX2 XX*2
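To make the trick concrete, here is a minimal Python sketch of recursive backtracking on a grid where walls occupy cells and the carver advances two cells at a time (the question used C; the function and grid names here are mine, not from the question):

```python
import random

def generate_maze(w, h):
    """Carve a maze on a w x h character grid (both odd) where walls
    occupy cells. 'X' = wall, ' ' = open corridor."""
    grid = [['X'] * w for _ in range(h)]

    def carve(x, y):
        grid[y][x] = ' '
        dirs = [(2, 0), (-2, 0), (0, 2), (0, -2)]
        random.shuffle(dirs)
        for dx, dy in dirs:
            nx, ny = x + dx, y + dy          # advance two cells at a time
            if 0 <= nx < w and 0 <= ny < h and grid[ny][nx] == 'X':
                grid[y + dy // 2][x + dx // 2] = ' '  # open the wall between
                carve(nx, ny)

    carve(0, 0)                               # entry in the top left corner
    return grid

maze = generate_maze(9, 9)
print('\n'.join(''.join(row) for row in maze))
```

With odd dimensions every cell at even coordinates is reachable, so the exit at (w-1, h-1) is always carved; for even dimensions you would carve the extra cell next to the exit by hand, as described above.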
I am implementing a VHDL 8-bit fixed-point multiplication module which returns an 8-bit truncated number, but I ran into a problem when doing multiplications by hand in order to test it. The problem arises when I want to multiply two negative numbers.
I tried multiplying two positive values: 1.67 * 0.625 ~ 1.04 (0.906 in the binary multiplication).
001.10101 -> 1.67
000.10100 -> 0.625
------------
000000.1110111010 = 000.11101 (truncated to 8bits = 0.906)
I tried multiplying negative and positive numbers (-0.875 * 3 ~ -2.62):
111.00100 -> -0.875
011.00000 -> 3
----------
010101.0110000000 = 101.01100 (truncated to 8bits = -2.625)
So far everything is working properly. The problem comes when I try to multiply two negative numbers. According to what I know (unless I'm mistaken):
- multiplying two numbers gives a result with twice the resolution (multiply two 8-bit numbers and you get a 16-bit number)
- the fixed point shifts as well. In this example there are 3 bits before the point and 5 bits after, which means that the resulting number will have 6 bits before the point and 10 bits after.
Under these assumptions the above calculations worked properly. But when I try to multiply two negative values (-0.875 * -1.91 ~ 1.67):
110.00010 -> -1.91 (1.9375)
111.00100 -> -0.875
------------
101011.0011001000 = 011.00110(truncated to 8 bits = 3.1875)
Naturally, I tried another negative multiplication (-2.64 * -0.875 = 2.31)
101.01011 -> -2.64
111.00100 -> -0.875
----------
100110.0001001100 = 110.00010 (truncated to 8bits = -1.9375)
Clearly I'm doing something wrong, but I just can't see what I'm doing wrong.
PS: I haven't implemented it yet. The thought came to me, I figured out how I was going to do it, and then I tried to test it by hand with some simple examples. I also tried more multiplications, thinking that maybe the earlier ones only worked out because I was lucky, but apparently not: the additional ones worked too. So maybe I'm doing something wrong when multiplying two negative numbers; maybe I'm truncating it wrong? Probably.
EDIT:
Ok, I found a Xilinx document that states how multiplication is done when the two operands are negative, here is the link. According to this document, this can only be done when doing extended multiplication: the last partial sum of the multiplication must be inverted, and then 1 added to it, to produce the correct number.
To do the multiplications I used Windows' calculator in programmer mode: I put the 8-bit numbers in the calculator, got the result, and truncated it. Since that worked for the other cases, the Windows calculator must be doing a direct multiplication (adding all the partial sums as they are instead of inverting the last partial sum). So, this means that to obtain the real result I should subtract the first operand from the final result and then add the first operand inverted + 1:
110.00010 -> -1.91 (1.9375)
111.00100 -> -0.875
------------
101011.0011001000
Which gave me the result: 000010.0111001000 = 010.01110(truncated to 8bits =2.43)
And with the other one I came up with the result of 1.875. Those outputs aren't exactly great, but at least they are closer to what I expected. Is there any other, easier way to do this?
Your intermediate results are wrong, so the truncation did not work as expected. Moreover, the truncation is only possible without overflow if the four top-most bits of the intermediate result are equal in your format.
You should use signed data types to do the multiplication right.
Even your second example is wrong. The intermediate binary result 010101.0110000000 represents the decimal number 21.375 which is not the product of -0.875 and 3. So, let's do the multiplication by hand:
a * b = -0.875 * 3 = -2.625
111.00100 * 011.00000
---------------------
. 00000000 // further lines containing only zeros have been omitted
+ .01100000
+ 011.00000
+ 0110.0000
+ 110100.000 // add -(2^2) * b !
= 111101.0110000000 = -2.625 (intermediate result)
= 101.01100 = -2.625 after truncation
You have to add the two's complement of b in the last partial sum because the '1' in the top-most bit of a represent the value -(2^2) = -4. Truncation without overflow is possible here because the 4 top-most bits of the intermediate result are equal.
And now the third example
a * b = -1.9375 * -0.875 = 1.6953125
110.00010 * 111.00100
---------------------
. 00000000 // further lines containing only zeros have been omitted
+ 111111.111100100 // sign-extended partial-sum
+ 111110.0100 // sign-extended partial-sum
+ 000011.100 // add -4 * b
= 000001.101100100 = 1.6953125 (intermediate result)
~ 001.10110 = 1.6875 after truncation
As b is a signed number, one always has to sign-extend the partial sums to the width of the intermediate result. Of course, this has also been done in the calculation of the second example, but there it does not make a difference.
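The same rule, do the multiplication on signed values and then truncate, can be checked with a short Python sketch (the helper names are mine; this models the numbers, not the VHDL):

```python
def sdecode(v, bits, frac):
    """Interpret an unsigned bit pattern as a signed fixed-point value."""
    if v >= 1 << (bits - 1):
        v -= 1 << bits                # apply the two's-complement sign weight
    return v / (1 << frac)

def fixmul_q3_5(a, b):
    """Multiply two 8-bit Q3.5 patterns, returning the truncated Q3.5 result.
    Assumes the four top-most bits of the Q6.10 product are equal (no overflow)."""
    sa = a - 256 if a >= 128 else a   # signed 8-bit values
    sb = b - 256 if b >= 128 else b
    prod = sa * sb                    # exact 16-bit Q6.10 product, done signed
    return (prod >> 5) & 0xFF         # drop 5 fraction bits, keep the low 8 bits

# Third example above: -1.9375 * -0.875
r = fixmul_q3_5(0b11000010, 0b11100100)
print(bin(r), sdecode(r, 8, 5))       # 0b110110 1.6875
```

Note that Python's right shift on a negative number is an arithmetic shift, which is exactly "drop the low bits of the two's-complement pattern", i.e. the truncation used in the answer.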
How to generate pseudo random numbers and row-counts in Tableau? I didn't find any built-in functions (like 'RAND', 'RCOUNT').
Edit:
Just learned that there is a Random() function in Tableau. It is not in the function library, but if you type it anyway, Tableau will tell you that the formula is valid, and it will create a value between 0 and 1.
Original and still valid answer in case you want to use officially supported functions:
Since Tableau is used to create graphs based on your data, there is usually little use for random numbers (would you explain what you need them for?).
However, you could use an approach like this to work around the limitation: http://community.tableau.com/docs/DOC-1474
Basically: get a semi-random seed out of the current time, combine it with other values based on table calculations, and multiply it with other semi-random values.
Seed
(DATEPART('second', NOW()) + 1) * (DATEPART('minute', NOW()) + 1) * (DATEPART('hour', NOW()) + 1) * (DATEPART('day', NOW()) + 1)
Random Number
((PREVIOUS_VALUE(MIN([Seed])) * 1140671485 + 12820163) % (2^24))
Random Int
INT([Random Number] / (2^24) * [Random Upper Limit]) + 1
Where [Random Upper Limit] is a user defined value to limit the range of the result.
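Outside Tableau, the Random Number recurrence above is just a linear congruential generator. A small Python sketch of what those calculated fields compute (the function name is mine; the constants are taken from the formulas above):

```python
def tableau_random(seed, upper, count):
    """Emulate the workaround: iterate the LCG and scale each state into 1..upper."""
    state = seed
    out = []
    for _ in range(count):
        state = (state * 1140671485 + 12820163) % (1 << 24)   # "Random Number"
        out.append(int(state / (1 << 24) * upper) + 1)        # "Random Int"
    return out

print(tableau_random(seed=12345, upper=10, count=5))
```

As in the Tableau version, the sequence is deterministic for a given seed, which is why the seed is derived from NOW().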
Oracle's documentation (http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions088.htm#i77996) says that "MOD returns the remainder of n2 divided by n1. Returns n2 if n1 is 0." But I got an unexpected result (I expected 1.1 but got 0) when I passed a binary_float as n2.
SQL> select mod(1.1,0), to_binary_float('1.1'), mod(to_binary_float('1.1'), 0) from dual;
MOD(1.1,0) TO_BINARY_FLOAT('1.1') MOD(TO_BINARY_FLOAT('1.1'),0)
---------- ---------------------- -----------------------------
1.1 1.1E+000 0
Does anyone have any idea?
Interesting. I think it has something to do with FLOOR vs ROUND being used internally in the calculations.
For example, the REMAINDER function is very similar to MOD, except it uses ROUND instead of FLOOR. For this example, it returns NAN (not a number):
select remainder(to_binary_float(1.1), 0) from dual
Output:
NAN
What's more interesting is that I can use the NANVL function to provide a default value when NAN is returned (in this case, mimicking the MOD behavior and returning 1.1), and it will return a float value:
select nanvl(remainder(to_binary_float(1.1), 0), 1.1) from dual
Output:
1.10000002384186
So perhaps that's your workaround.
Hope that helps
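To illustrate the FLOOR-vs-ROUND hypothesis, here is a Python sketch of the two textbook remainder definitions (the helper names are mine; this mimics the documented behavior, not Oracle's internals):

```python
import math

def mod_floor(n2, n1):
    """Floor-based remainder; returns n2 when n1 is 0, as Oracle's MOD documents."""
    if n1 == 0:
        return n2
    return n2 - n1 * math.floor(n2 / n1)

def remainder_round(n2, n1):
    """Round-to-nearest remainder; NaN when n1 is 0, like REMAINDER on floats."""
    if n1 == 0:
        return math.nan
    return n2 - n1 * round(n2 / n1)

print(mod_floor(1.1, 0))           # 1.1
print(remainder_round(1.1, 0))     # nan
print(mod_floor(11, 4), remainder_round(11, 4))   # 3 -1
```

The last line shows the two definitions diverging even for a nonzero divisor: FLOOR keeps the remainder non-negative here, while rounding to the nearest multiple gives -1, which matches Oracle's REMAINDER(11, 4).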
I have an array, Array = {}, and the size of the array is n.
My constraints are like this:
n <= 100000
and Array[i] <= 100
I have to find the product of all the elements in the array, modulo a given value. The mod value changes each time, and it is always less than or equal to n.
My problem: I chose a global limit, say R = 1000000000 (which is far bigger than the mod constraint), and whenever my product exceeds this value, I reduce the result modulo the given value.
But I don't know why the result I'm obtaining is zero.
My question is: how do I choose R in such situations?
I don't know your code, but it is likely that 0 is the correct result.
Pick R as a large prime and make sure that none of the elements is divisible by it in order to get a result different from 0.
You haven't shown us your code, but presumably it looks something like the following pseudo-Python code:
limit = 1000000
def product_mod(array, m):
    product = 1
    for k in array:
        product = product * k
        if product > limit:
            product = product % m
    return product % m
This algorithm should work, provided that the limit is low enough that product * k can never overflow. If it doesn't work, you probably have a bug in your code.
However, note that it's quite likely that the result of this function may often be legitimately zero: specifically, this happens whenever the modulus evenly divides the product of the numbers in your array. Since the product will typically be a highly composite number (it has at least as many factors as there are numbers in the array), this is pretty likely.
In particular, the output of the function will be zero whenever:
any one of the numbers in the array is zero,
any one of the numbers in the array is a multiple of the modulus, or
the product of any subset of the numbers in the array is a multiple of the modulus.
In all those cases, the product of all the numbers in the array will be either zero or a multiple of the modulus (and will thus reduce to zero).
It sounds like you are calculating the product of the values in the array modulo some given value, but are using another value to limit integer overflow in the intermediate calculations. In that case you have a high risk of getting a wrong result.
E.g. 120 mod 9 = 3 while (120 mod 100) mod 9 = 20 mod 9 = 2
The correct procedure is to do all the calculations modulo the same number as you use for the final result, since (a * b) mod n = ((a mod n) * (b mod n)) mod n.
E.g. (24 * 5) mod 9 = (24 mod 9) * (5 mod 9) mod 9 = (6 * 5) mod 9 = 30 mod 9 = 3
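Putting that identity into code, a sketch of the whole computation with the reduction applied after every multiplication (so intermediates never grow past m times the largest element, and no separate limit R is needed):

```python
def product_mod(array, m):
    """Product of all elements modulo m, reducing after every multiplication.
    Intermediates stay below m * max(array), so no overflow guard R is needed."""
    product = 1
    for k in array:
        product = (product * k) % m
    return product

print(product_mod([24, 5], 9))    # 3, matching (24 * 5) mod 9 above
```

Note that if any element (or any partial product) is a multiple of m, the result is legitimately 0, as the previous answer explains.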