According to the manual page of getNext in the PCGRandom module, we can generate random numbers in a given range, for example:
use Random;

var rng1 = new owned RandomStream( eltType= real, seed= 100 );
var rng2 = new owned RandomStream( eltType= int, seed= 100 );

for i in 1..5 do
  writeln( rng1.getNext( min= 3.0, max= 5.0 ) );
writeln();
for i in 1..5 do
  writeln( rng2.getNext( min= 20, max= 80 ) );
which gives (with chpl-1.20.0):
4.50371
4.85573
4.2246
4.84289
3.63607
36
57
79
39
57
Here, I noticed that the man page gives the following notes for both the integer and real-number cases:
For integers, this class uses a strategy for generating a value in a particular range that has not been subject to rigorous study and may have statistical problems.
For real numbers, this class generates a random value in [max, min] by computing a random value in [0,1] and scaling and shifting that value. Note that not all possible floating point values in the interval [min, max] can be constructed in this way.
(where I used italics for emphasis). For real numbers, is this related to the so-called "density of floating-point numbers" (e.g., as asked on this page)? Also, for integers, is there some case where we need to be careful even for "typical" use?
(here, "typical" means, e.g., a generation of 10**8 random integers distributed approximately flat in a given range.)
FYI, my "use case" is not something like rigorous quality tests for random numbers, but just typical Monte Carlo calculations (e.g., selecting random sites on a cubic lattice).
The notes in the manual page indicate a difference from the other PCG random number methods, which have been studied (by the author of the PCG algorithm, at the very least).
The issue with floating-point numbers is indeed related to floating-point number density. See http://www.pcg-random.org/using-pcg-c-basic.html#generating-doubles from the PCG author. It is a potential problem even when generating random numbers in [0.0, 1.0]. This paragraph from the documentation describes the issue:
When generating a real, imaginary, or complex number, this
implementation uses the strategy of generating a 64-bit unsigned
integer and then multiplying it by 2.0**-64 in order to convert it to
a floating point number. While this does construct a uniform
distribution on rounded floating point values, it leaves out many
possible real values (for example, 2**-128). We believe that this
strategy has reasonable statistical properties. One side effect of
this strategy is that the real number 1.0 can be generated because of
rounding. The real number 0.0 can be generated because PCG can produce
the value 0 as a random integer.
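To make the quoted strategy concrete, here is a minimal C sketch of the same idea (my own illustration, not the Chapel implementation itself): a 64-bit unsigned integer is scaled by 2.0**-64, and rounding in the conversion to double is exactly what lets the result reach 1.0.

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Any 64-bit random integers would do here; these two are hand-picked
       to show the boundary behaviour, they are not output of PCG. */
    uint64_t small = 0;             /* maps to 0.0                       */
    uint64_t huge  = UINT64_MAX;    /* rounds up to 2**64, so yields 1.0 */

    /* The strategy from the quoted documentation: scale by 2**-64. */
    double a = (double)small * 0x1p-64;
    double b = (double)huge  * 0x1p-64;

    printf("%.17g %.17g\n", a, b);  /* prints 0 and 1 */
    return 0;
}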
Note that a 64-bit real can store numbers as small as 2.0**-1024 but it is quite impossible to get such a number by dividing a positive integer by 2**64. (Here and in the above I am using ** as the exponentiation operator, as that is what it does in Chapel syntax). I recommend reading up on IEEE floating point formats (e.g. https://en.wikipedia.org/wiki/IEEE_754 or https://en.wikipedia.org/wiki/Double-precision_floating-point_format ) for background information in this area. You might care about this if you were using an RNG to generate test inputs to an algorithm operating on real(64) values. In that event you might wish for even the very small values to be generated. Note though that constructing an RNG that can generate all real(64) values in a non-uniform manner is not so hard (e.g. just by copying the bits from a uint into a real).
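As a rough illustration of the bit-copying idea mentioned at the end of that paragraph (my own C sketch, assuming IEEE 754 doubles, which is what Chapel's real(64) corresponds to), reinterpreting the bits makes every pattern reachable, including tiny subnormals, at the cost of a very non-uniform distribution over values:

#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Reinterpret the 64 bits of u as a double (may also yield infinities or NaNs). */
static double bits_to_double(uint64_t u) {
    double d;
    memcpy(&d, &u, sizeof d);   /* bit copy, no numeric conversion */
    return d;
}

int main(void) {
    printf("%g\n", bits_to_double(1));                     /* smallest subnormal, 2**-1074 */
    printf("%g\n", bits_to_double(0x3FF0000000000000ULL)); /* exactly 1.0 */
    return 0;
}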
Regarding the other part of your question:
I did some basic statistical testing of generating random integers in a particular range with TestU01, and I'd be confident in its use for Monte Carlo calculations. However, I am not an expert in this area, and as a result I put that warning in the documentation. The information below, from the documentation, describes the testing that I did:
We have tested this implementation with TestU01 (available at
http://simul.iro.umontreal.ca/testu01/tu01.html ). We measured our
implementation with TestU01 1.2.3 and the Crush suite, which consists
of 144 statistical tests. The results were:
no failures for generating uniform reals
1 failure for generating 32-bit values (which is also true for the reference version of PCG with the same configuration)
0 failures for generating 64-bit values (which we provided to TestU01 as 2 different 32-bit values since it only accepts 32 bits at a time)
0 failures for generating bounded integers (which we provided to TestU01 by requesting values in [0, 2**31+2**30+1) until we had two values < 2**31, removing the top 0 bit, and then combining the top 16 bits into the value provided to TestU01).
This question is not so much about C as about the algorithm. I need to implement a strtof() function which behaves exactly the same as the GCC one - and do it from scratch (no GNU MPL etc.).
Let's skip checks, consider only correct inputs and positive numbers, e.g. 345.6e7. My basic algorithm is:
Split the number into fraction and integer exponent, so for 345.6e7 fraction is 3.456e2 and exponent is 7.
Create a floating-point exponent. To do this, I use these tables:
static const float powersOf10[] = {
1.0e1f,
1.0e2f,
1.0e4f,
1.0e8f,
1.0e16f,
1.0e32f
};
static const float minuspowersOf10[] = {
1.0e-1f,
1.0e-2f,
1.0e-4f,
1.0e-8f,
1.0e-16f,
1.0e-32f
};
and get the float exponent as a product of the table entries corresponding to the set bits of the integer exponent, e.g. 7 = 1+2+4 => float_exponent = 1.0e1f * 1.0e2f * 1.0e4f (a short sketch of this step follows below).
Multiply the fraction by the floating-point exponent and return the result.
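For reference, the bit-by-bit table lookup described above could be coded like this (a minimal C sketch using the tables from the question; the helper name pow10_from_bits is just for illustration). Each intermediate product is rounded to 24-bit precision, which is where the accumulated error comes from:

#include <stdio.h>

/* Tables as in the question (extend as needed). */
static const float powersOf10[]      = { 1.0e1f, 1.0e2f, 1.0e4f, 1.0e8f, 1.0e16f, 1.0e32f };
static const float minuspowersOf10[] = { 1.0e-1f, 1.0e-2f, 1.0e-4f, 1.0e-8f, 1.0e-16f, 1.0e-32f };

/* Build 10^exp10 by multiplying the table entries selected by the bits of |exp10|.
   Every float multiplication rounds to 24 bits of precision, so the error grows
   with the number of set bits - the problem described in the question. */
static float pow10_from_bits(int exp10) {
    const float *tab = (exp10 < 0) ? minuspowersOf10 : powersOf10;
    unsigned int e = (exp10 < 0) ? (unsigned int)(-exp10) : (unsigned int)exp10;
    float result = 1.0f;
    for (int i = 0; e != 0 && i < 6; i++, e >>= 1)
        if (e & 1u)
            result *= tab[i];
    return result;
}

int main(void) {
    printf("%g\n", pow10_from_bits(7));   /* 1e+07 */
    printf("%g\n", pow10_from_bits(-32)); /* 1e-32 */
    return 0;
}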
And here comes the first problem: since we do a lot of multiplications, we get a fairly big error because the multiplication result is rounded each time. So, I decided to dive into the floating-point multiplication algorithm and implement it myself: a function that takes a number of floats (in my case - up to 7) and multiplies them at the bit level. Assume I have a uint256_t type to fit the mantissa product.
Now, the second problem: rounding the mantissa product to 23 bits. I've tried several rounding methods (round-to-even, von Neumann rounding - a small article about them), but none of them gives the correct result for all the test numbers. And some of them really confuse me, like this one:
7038531e-32. GCC's strtof() returns 0x15ae43fd, so the correct unbiased mantissa is 2e43fd. I go for multiplication of 7.038531e6 (biased mantissa d6cc86) and 1e-32 (b.m. cfb11f). The resulting unbiased mantissa in binary form is
bits 47..32:  0001 0111 0010 0001
bits 31..16:  1111 1110 1110 0010
bits 15..0:   1011 0101 0001 1101
which I have to round to 23 bits. However, every rounding method tells me to round it up, and I'll get 2e43fe as the result - wrong! So, for this number the only way to get the correct mantissa is to just chop it - but chopping does not work for other numbers.
Having worked on this for countless nights, my questions are:
Is this approach to strtof() correct? (I know that GCC uses GNU MPL for it, and I tried to look into it. However, copying MPL's implementation would require porting the entire library, and this is definitely not what I want.) Maybe this split-then-multiply algorithm is inevitably prone to errors? I tried some other small tricks (e.g. creating exponent tables for all integer exponents in the float range), but they led to even more failed conversions.
If so, did I miss something while rounding? I thought so for a long time, but this 7038531e-32 number completely confused me.
If I want to be as precise as I can, I usually do stuff like this (however I usually do the reverse operation, float -> text):
use only integers (no floats whatsoever)
As you know, a float is an integer mantissa bit-shifted by an integer exponent, so there is no need for floats.
For constructing the final float datatype you can use a simple union with a float and a 32-bit unsigned integer in it ... or pointers to such types pointing to the same address (a sketch of this follows right after this tip).
This will avoid rounding errors for numbers that fit completely and considerably shrink the error for those that don't.
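As a rough illustration of that union trick (a sketch of my own, assuming IEEE 754 single precision; a memcpy-based type pun is the strictly portable alternative), assembling the final float from already-computed sign, exponent and mantissa integers can look like this:

#include <stdint.h>
#include <stdio.h>

/* Build an IEEE 754 single-precision float from its fields.
   sign: 0 or 1, biased_exp: 0..255, mantissa: low 23 bits (implicit 1 not stored). */
static float make_float(uint32_t sign, uint32_t biased_exp, uint32_t mantissa) {
    union { uint32_t u; float f; } pun;
    pun.u = (sign << 31) | (biased_exp << 23) | (mantissa & 0x7FFFFFu);
    return pun.f;
}

int main(void) {
    /* 0x15ae43fd from the question: sign 0, biased exponent 0x2B, mantissa 0x2e43fd. */
    printf("%.9g\n", make_float(0, 0x2B, 0x2e43fd));  /* the value of 7038531e-32 */
    return 0;
}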
use hex numbers
You can convert your text of a decimal number on the fly into its hex counterpart (still as text); from there, creating the mantissa and exponent integers is simple.
Here:
How to convert a gi-normous integer (in string format) to hex format? (C#)
is a C++ implementation example of dec2hex and hex2dec number conversions done on text.
use more bits for the mantissa while converting
For a task like this and single-precision float, I usually use 2 or 3 32-bit DWORDs for the 24-bit mantissa, to still hold some precision after the multiplications. If you want to be precise, you have to deal with 128+24 bits for both the integer and fractional part of the number, so 5x 32-bit numbers in sequence.
For more info and inspiration see (reverse operation):
my best attempt to print 32 bit floats with least rounding errors (integer math only)
Your code will be just the inverse of that (so many parts will be similar).
Since I posted that, I have made an even more advanced version that recognizes formatting just like printf, supports many more datatypes and more, without using any libs (however it's ~22.5 KByte of code). I needed it for MCUs, as the GCC print implementations are not very good there ...
My application requires a fractional quantity multiplied by a monetary value.
For example, $65.50 × 0.55 hours = $36.025 (rounded to $36.03).
I know that floats should not be used to represent money, so I'm storing all of my monetary values as cents. $65.50 in the above equation is stored as 6550 (integer).
For the fractional coefficient, my issue is that 0.55 does not have an exact 32-bit float representation. In the use case above, 0.55 hours == 33 minutes, so 0.55 is an example of a specific value that my application will need to account for exactly. The floating-point representation 0.550000012 is insufficient, because the user will not understand where the additional 0.000000012 came from. I cannot simply call a rounding function on 0.550000012 because it will round to the nearest whole number.
Multiplication solution
To solve this, my first idea was to store all quantities as integers and multiply × 1000. So 0.55 entered by the user would become 550 (integer) when stored. All calculations would happen without floats, and then simply divide by 1000 (integer division, not float) when presenting the result to the user.
I realize that this would permanently limit me to 3 decimal places of precision. If I decide that 3 is adequate for the lifetime of my application, does this approach make sense?
Are there potential rounding issues if I were to use integer division?
Is there a name for this process? EDIT: As indicated by @SergGr, this is fixed-point arithmetic.
Is there a better approach?
EDIT:
I should have clarified, this is not time-specific. It is for generic quantities like 1.256 pounds of flour, 1 sofa, or 0.25 hours (think invoices).
What I'm trying to replicate here is a more exact version of Postgres's extra_float_digits = 0 functionality, where if the user enters 0.55 (float32), the database stores 0.550000012 but when queried for the result returns 0.55 which appears to be exactly what the user typed.
I am willing to limit this application's precision to 3 decimal places (it's business, not scientific), so that's what made me consider the × 1000 approach.
I'm using the Go programming language, but I'm interested in generic cross-language solutions.
Another solution is to store the result in rational form. You can express the number as two integers p and q such that the number equals p/q. Hence, you can keep more precision for your numbers and do the math on rational numbers represented as pairs of integers. A minimal sketch follows.
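For illustration only (this sketch and its names are not part of the answer above; a real application would guard against overflow or use a big-integer rational type such as Go's math/big.Rat), exact multiplication on such pairs can be as simple as:

#include <stdio.h>

/* A value stored as p/q with q > 0. */
typedef struct { long p, q; } rational;

static long gcd(long a, long b) {
    while (b != 0) { long t = a % b; a = b; b = t; }
    return a < 0 ? -a : a;
}

/* Exact product of two rationals, reduced to lowest terms. */
static rational rat_mul(rational a, rational b) {
    rational r = { a.p * b.p, a.q * b.q };   /* note: can overflow for big values */
    long g = gcd(r.p, r.q);
    if (g != 0) { r.p /= g; r.q /= g; }
    return r;
}

int main(void) {
    rational price = { 6550, 100 };  /* $65.50             */
    rational hours = { 55, 100 };    /* 0.55 hours exactly */
    rational total = rat_mul(price, hours);
    printf("%ld/%ld dollars\n", total.p, total.q);  /* 1441/40 = $36.025 exactly */
    return 0;
}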
Note: This is an attempt to merge different comments into one coherent answer as was requested by Matt.
TL;DR
Yes, this approach makes sense but most probably is not the best choice
Yes, there are rounding issues but there inevitably will be some no matter what representation you use
What you suggest using is called decimal fixed-point numbers
I'd argue yes, there is a better approach, and it is to use some standard or popular decimal floating-point number library for your language (Go is not my native language so I can't recommend one)
In PostgreSQL it is better to use Numeric (something like Numeric(15,3) for example) rather than a combination of float4/float8 and extra_float_digits. Actually this is what the first item in the PostgreSQL doc on Floating-Point Types suggests:
If you require exact storage and calculations (such as for monetary amounts), use the numeric type instead.
Some more details on how non-integer numbers can be stored
First of all, there is the fundamental fact that there are infinitely many numbers in the range [0, 1], so you obviously can't store every number there in any finite data structure. It means you have to make some compromises: no matter what way you choose, there will be some numbers you can't store exactly, so you'll have to round.
Another important point is that people are used to the base-10 system, and in that system only results of division by numbers of the form 2^a*5^b can be represented using a finite number of digits. For every other rational number, even if you somehow store it in exact form, you will have to do some truncation and rounding at the formatting-for-human-usage stage.
Potentially there are infinitely many ways to store numbers. In practice only a few are widely used:
floating-point numbers, with the two major branches of binary (this is what most of today's hardware natively implements and what most languages support as float or double) and decimal. This is the format that stores a mantissa and an exponent (which can be negative), so the number is mantissa * base^exponent (I omit the sign and just say it is logically a part of the mantissa, although in practice it is usually stored separately). Binary vs. decimal is specified by the base. For example, 0.5 will be stored in binary as the pair (1,-1), i.e. 1*2^-1, and in decimal as the pair (5,-1), i.e. 5*10^-1. Theoretically you can use any other base as well, but in practice only 2 and 10 make sense as the bases.
fixed-point numbers, with the same division into binary and decimal. The idea is the same as for floating-point numbers, but some fixed exponent is used for all the numbers. What you suggest is actually a decimal fixed-point number with the exponent fixed at -3 (see the sketch after this list). I've seen binary fixed-point numbers used on some embedded hardware with no built-in support for floating-point numbers, because binary fixed-point numbers can be implemented with reasonable efficiency using integer arithmetic. As for decimal fixed-point numbers, in practice they are not much easier to implement than decimal floating-point numbers but provide much less flexibility.
rational number format, i.e. the value is stored as a pair (p, q) which represents p/q (and usually q > 0, so the sign is stored in p, and either p=0, q=1 for 0 or gcd(p,q) = 1 for every other number). Usually this requires some big-integer arithmetic to be useful in the first place (here is a Go example, math/big.Rat). Actually this might be a useful format for some problems, and people often forget about this possibility, probably because it is often not part of a standard library. Another obvious drawback is that, as I said, people are not used to thinking in rational numbers (can you easily compare which is greater, 123/456 or 213/789?), so you'll have to convert the final results to some other form. Another drawback is that if you have a long chain of computations, the internal numbers (p and q) can easily become very big values, so computations will be slow. Still, it may be useful for storing intermediate results of calculations.
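To make the fixed-point option concrete, here is a minimal sketch of the x1000 idea from the question (helper names are mine, and a production version would need overflow checks); the key detail is rounding explicitly when dividing the extra scale factor back out, instead of relying on truncating integer division:

#include <stdio.h>
#include <stdint.h>

/* Money in cents (scale 100), quantities in thousandths (scale 1000). */
typedef int64_t cents;
typedef int64_t milli;

/* cents * milli -> cents, rounding half away from zero instead of truncating. */
static cents mul_money(cents price, milli qty) {
    int64_t raw = price * qty;              /* scale is now 100 * 1000 = 100000 */
    return (raw >= 0) ? (raw + 500) / 1000  /* divide out the 1000, with rounding */
                      : (raw - 500) / 1000;
}

int main(void) {
    cents price = 6550;   /* $65.50  */
    milli hours = 550;    /* 0.550 h */
    cents total = mul_money(price, hours);  /* 3602.5 -> rounds to 3603 = $36.03 */
    printf("$%lld.%02lld\n", (long long)(total / 100), (long long)(total % 100));
    return 0;
}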
In practical terms there is also a division into arbitrary-length and fixed-length representations. For example:
IEEE 754 float and double are fixed-length floating-point binary representations,
Go math/big.Float is an arbitrary-length floating-point binary representation,
.Net decimal is a fixed-length floating-point decimal representation,
Java BigDecimal is an arbitrary-length floating-point decimal representation.
In practical terms I'd say that the best solution for your problem is some big enough fixed-length floating-point decimal representation (like .Net decimal). An arbitrary-length implementation would also work. If you have to make an implementation from scratch, then your idea of a fixed-length fixed-point decimal representation might be OK, because it is the easiest thing to implement yourself (a bit easier than the previous alternatives), but it may become a burden at some point.
As mentioned in the comments, it would be best to use some built-in Decimal module in your language to handle exact arithmetic. However, since you haven't specified a language, we cannot be certain that your language even has such a module. If it does not, here is how to go about it.
Consider using Binary Coded Decimal (BCD) to store your values. It works by restricting the values that can be stored per byte to 0 through 9 (inclusive), "wasting" the rest. You can encode a decimal representation of a number byte by byte that way. For example, 613 would become
6 -> 0000 0110
1 -> 0000 0001
3 -> 0000 0011
613 -> 0000 0110 0000 0001 0000 0011
where each grouping of 4 bits above is a "nibble" of a byte. In practice, a packed variant is used, where two decimal digits are packed into a byte (one per nibble) to be less "wasteful". You can then implement a few methods to do your basic addition, subtraction, multiplication, etc.: just iterate over an array of bytes and perform the classic grade-school addition/multiplication algorithms (keep in mind for the packed variant that you may need to pad a zero to get an even number of nibbles). You just need to keep a variable storing where the decimal point is, and remember to carry where necessary to preserve the encoding. A small sketch of the unpacked addition follows.
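As a small illustration of the unpacked variant (my own sketch: most significant digit first, fixed width, no decimal-point handling), grade-school addition over BCD digits looks like this:

#include <stdio.h>

#define NDIGITS 8  /* fixed width for the sketch, most significant digit first */

/* Add two unpacked-BCD numbers digit by digit with a decimal carry. */
static void bcd_add(const unsigned char a[NDIGITS],
                    const unsigned char b[NDIGITS],
                    unsigned char out[NDIGITS]) {
    unsigned carry = 0;
    for (int i = NDIGITS - 1; i >= 0; i--) {
        unsigned s = a[i] + b[i] + carry;   /* each entry is 0..9 */
        out[i] = (unsigned char)(s % 10);
        carry = s / 10;                     /* carry into the next decimal place */
    }
    /* A real implementation would grow the array instead of dropping a final carry. */
}

int main(void) {
    unsigned char x[NDIGITS] = {0,0,0,0,0,6,1,3};   /* 613 */
    unsigned char y[NDIGITS] = {0,0,0,0,0,4,9,9};   /* 499 */
    unsigned char z[NDIGITS];
    bcd_add(x, y, z);                                /* 1112 */
    for (int i = 0; i < NDIGITS; i++) printf("%d", z[i]);
    printf("\n");
    return 0;
}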
I am confused between these three functions ($random, $urandom and $urandom_range) and I was hoping for some explanation. If I set a range, how do I make the range exclusive or inclusive? Are the ranges inclusive or exclusive if I don't specify the range?
In addition to the answer from @dave_59, there are other important differences:
i) $random returns a signed 32-bit integer; $urandom and $urandom_range return unsigned 32-bit integers.
ii) The random number generator for $random is specified in IEEE Std 1800-2012. With the same seed you will get exactly the same sequence of random numbers in any SystemVerilog simulator. That is not the case for $urandom and $urandom_range, where the design of the random number generator is up to the EDA vendor.
iii) Each thread has its own random number generator for $urandom and $urandom_range, whereas there is only one random number generator for $random shared between all threads (i.e. only one for the entire simulation). This is really important, because having a separate random number generator for each thread gives your simulation a property called random stability. Suppose you are using a random number generator to generate random stimulus. Suppose you find a bug and fix it. This could easily change the order in which threads (i.e. initial and always blocks) are executed. If that change altered the order in which random numbers were generated, you would never know whether the bug had gone away because you'd fixed it or because the stimulus had changed. If you have a random number generator for each thread, your testbench is far less vulnerable to such an effect - you can be far more sure that the bug disappeared because you fixed it.
So, as @dave_59 says, you should only be using $urandom and $urandom_range.
You should only be using $urandom and $urandom_range. These two functions provide better quality random numbers and better seed initialization and stability than $random. The range specified by $urandom_range is always inclusive.
Although $random generates the exact same sequence of random numbers for a given seed, it is extremely difficult to keep the same call ordering as soon as any change is made to the design or testbench. It is even more difficult when multiple threads are concurrently generating random numbers.
I am struggling to find a way to generate a random number within a given interval in PostScript.
Basically PostScript has three functions to help you generate (pseudo-)random numbers. Those are rand, srand and rrand.
The latter two are for passing a seed to the number generator to be able to reproduce specific results. At least that's what I understood they are for. Anyway, they don't seem suitable for my case.
So rand seems to be the only function I can use to generate a random number, but...
rand returns a random integer in the range 0 to 2^31 - 1 (from the PostScript Language Reference, page 637 (651 in the PDF))
This is far beyond the interval I'm looking for. I am more interested in values up to the small thousands, maybe 10,000 or so, and small float values, up to 100, all with a lower limit of 0.
I thought I could just narrow my numbers down by simple divisions and extracting roots, but that tends to give me unusably small values in quite a lot of cases. I am wondering if there are robust ways to either shrink a large number down to what I need or, preferably, only generate numbers in the desired interval.
Besides: while-loops are not possible in PostScript, otherwise I'd have written a function to generate numbers until they fit in my interval.
Any hints on what to look for breaking numbers down into my interval?
mod is often good enough and it's fast. But you may get a more uniform distribution by using floating-point ops.
rand 16#7fffffff div 100 mul cvi
This is because mod discards the upper bits of the input, while the PRNG is usually trying to randomize over all the bits. By scaling down and then up, all the bits contribute something by way of rounding effects.
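For readers more comfortable reading C than PostScript, the same contrast can be sketched like this (purely illustrative; rand() here is C's generator, not PostScript's): the modulo approach keeps only the low-order bits, while the scale-down-then-up approach lets all bits contribute.

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    unsigned int r = (unsigned int)rand();  /* stand-in for PostScript's rand */

    /* Modulo: only the low-order bits of r influence the result. */
    int by_mod = (int)(r % 100);

    /* Scale to [0,1) using the generator's full range, then scale up:
       analogous to "rand 16#7fffffff div 100 mul cvi" from the answer. */
    int by_scale = (int)((double)r / ((double)RAND_MAX + 1.0) * 100.0);

    printf("%d %d\n", by_mod, by_scale);
    return 0;
}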
Just use the modulo operator to get it down to the size you want:
GS>rand 100 mod stack
7
An old idea, but ever since then I couldn't get around to finding a reasonably good way to solve the problem it raised. So I "invented" (see below) a very compact and, in my opinion, reasonably well-performing PRNG, but I can't figure out algorithms to build suitable seed values for it at large bit depths. My current solution is simply brute force; its running time is O(n^3).
The generator
My idea came from XOR taps (essentially LFSRs) that some old 8-bit machines used for sound generation. I fiddled with XOR as a base on a C64, tried to put opcodes together, and experimented with the result. The final working solution looked like this:
asl
adc #num1
eor #num2
This is 5 bytes on the 6502. With well-chosen num1 and num2, the accumulator iterates over all 256 values in a seemingly random order; that is, it looks reasonably random when used to fill the screen (I wrote a little 256-byte demo on this back then). There are 40 suitable num1 & num2 pairs for this, all giving decent-looking sequences.
The concept generalizes well; expressed in pure C, it may look like this (BITS being the bit depth of the sequence):
r = (((r >> (BITS-1)) & 1U) + (r << 1) + num1) ^ num2; /* shift left, fold the shifted-out bit into the add, then XOR */
r = r & ((1U<<BITS)-1U);                                /* keep only the low BITS bits */
This C code is longer since it is generalized, and even if one used the full width of an unsigned integer, C doesn't have the carry logic needed to transfer the high bit of the shift into the add operation.
For some performance analysis and comparisons, see below, after the question(s).
The problem / question(s)
The core problem with the generator is finding suitable num1 and num2 values which make it iterate over the whole possible sequence of a given bit depth. At the end of this section I attach my code, which just brute-forces it. It finishes in reasonable time for up to 12 bits, you can wait out all 16 bits (there are 5736 possible pairs for that, by the way, acquired with an overnight full search a while ago), and you might get a few 20-bit results if you are patient. But O(n^3) is really nasty...
(Who will get to find the first full 32bit sequence?)
Other interesting questions which arise:
For both num1 and num2, only odd values are able to produce full sequences. Why? This may not be hard to show (simple logic, I guess), but I never properly proved it.
There is a mirroring property along num1 (the add value): if a num1 'a' with a given num2 'b' gives a full sequence, then the two's complement of 'a' (in the given bit depth) with the same num2 also gives a full sequence. I have only observed this happening reliably across all the full sequences I calculated.
A third interesting property is that for all the num1 & num2 pairs the resulting sequences seem to form proper cycles; that is, at least the number zero always seems to be part of a cycle. Without this property my brute-force search would die in an infinite loop.
Bonus: was this PRNG already known before (and I just re-invented it)?
And here is the brute force search's code (C):
#define BITS 16
#include "stdio.h"
#include "stdlib.h"
int main(void)
{
unsigned int r;
unsigned int c;
unsigned int num1;
unsigned int num2;
unsigned int mc=0U;
num1=1U; /* Only odd add values produce useful results */
do{
num2=1U; /* Only odd eor values produce useful results */
do{
r= 0U;
c=~0U;
do{
r=(((r>>(BITS-1)) & 1U)+r+r+num1)^num2;
r&=(1U<<(BITS-1)) | ((1U<<(BITS-1))-1U); /* 32bit safe */
c++;
}while (r);
if (c>=mc){
mc=c;
printf("Count-1: %08X, Num1(adc): %08X, Num2(eor): %08X\n", c, num1, num2);
}
num2+=2U;
num2&=(1U<<(BITS-1)) | ((1U<<(BITS-1))-1U);
}while(num2!=1U);
num1+=2U;
num1&=((1U<<(BITS-1))-1U); /* Do not check complements */
}while(num1!=1U);
return 0;
}
This, to show that it is working, will after each iteration output the pair found if its sequence length is equal to or longer than the previous best. Modify the BITS constant for sequences of other depths.
Seed hunting
I did some graphing relating to the seeds. Here is a nice image showing all the 9-bit sequence lengths:
The white dots are the full-length sequences, the X axis is num1 (add), the Y axis is num2 (xor), and the brighter the dot, the longer the sequence. Other bit depths look very similar in pattern: they all seem to be broken up into sixteen major tiles with two patterns repeating with mirroring. The similarity of the tiles is not complete; for example, above, a diagonal from the upper-left corner to the bottom-right is clearly visible while its opposite is absent, but for the full-length sequences this property seems to be reliable.
Relying on this, it is possible to reduce the work even more than by the previous assumptions, but it's still O(n^3)...
Performance analysis
As of now the longest sequences that can be generated are 24 bits: on my computer it takes about 5 hours to brute-force a full 24-bit sequence. This is still just so-so for real PRNG tests such as Diehard, so for now I rather went with an approach of my own.
First it's important to understand the role of the generator. Given its simplicity, this would by no means be a very good generator; its goal is rather to produce decent numbers blazingly fast. In this region, not needing multiply or divide operations, a Galois LFSR can produce similar performance. So my generator is only of any use if it is capable of outperforming that one.
The tests I performed were all on 16-bit generators. I chose this depth since it gives a useful sequence length while the numbers can still be broken up into two 8-bit parts, making it possible to present various bit-exact graphs for visual analysis.
The core of the tests was looking for correlations between previously and currently generated numbers. For this I used X:Y plots where the previous generation was the Y and the current one the X, both broken up into low/high parts as mentioned above, giving two graphs. I created a program capable of plotting these stepped in real time, so as to make it possible to roughly examine how the numbers follow each other and how the graphs fill up. Here obviously only the end results are shown, as the generators ran through their full 2^16 or 2^16-1 (Galois) cycle.
The explanation of the fields:
The images consist of 8x2 graphs of 256x256 pixels each, making the total image size 2048x512 (check them at original size).
The top left graph just confirms that indeed a full sequence was plotted; it is simply an X = r % 256; Y = r / 256; plot.
The bottom left graph shows only every second number, plotted the same way as the top one, just confirming that the numbers occur reasonably randomly.
From the second graph on, the top row contains the high-byte correlation graphs. The first of them uses the previous generation, the next skips one number (so uses the 2nd previous generation), and so on until the 7th previous generation.
From the second graph on, the bottom row contains the low-byte correlation graphs, organized the same way as above.
Galois generator, 0xB400 tap set
This is the generator found in the Wikipedia Galois example. Its performance is not the worst, but it is still definitely not really good.
Galois generator, 0xA55A tap set
One of the decent Galois "seeds" I found. Note that the low part of the 16-bit numbers seems a lot better than in the above, however I couldn't find any Galois "seed" which would fuzz up the high byte.
My generator, 0x7F25 (adc), 0x00DB (eor) seed
This is the best of my generators where the high byte of the EOR value is zero. Limiting the high byte is useful on 8-bit machines, since then that part of the calculation can be omitted for smaller code and faster execution, if the loss of randomness quality is affordable.
My generator, 0x778B (adc), 0x4A8B (eor) seed
This is one of the very good quality seeds by my measurements.
To find seeds with good correlation, I built a small program which would analyse them to some degree, in the same way for both the Galois generator and mine. The "good quality" examples were pinpointed by that program, and then I tested several of them and selected one from those.
Some conclusions:
The Galois generator seems to be more rigid than mine. On all the correlation graphs definite geometrical patterns are observable (some seeds produce "checkerboard" patterns, not shown here), even if they are not composed of lines. My generator also shows patterns, but with more generations they grow less defined.
The portions of the Galois generator's results which include bits from the high byte seem to be inherently rigid, a property which seems to be absent from my generator. This is a weak assumption, though, probably needing some more research (to see if this is always so with the Galois generator and not so with mine on other bit combinations).
The Galois generator lacks zero (its maximal period being 2^16-1).
As of now it is impossible to generate a good set of seeds for my generator above 20 bits.
Later I might dig deeper into this subject, seeking to test the generator with Diehard, but as of now the inability to generate large enough seeds for it makes that impossible.
This is some form of a non-linear feedback shift register. I don't know if it has been used as such, but it resembles linear feedback shift registers somewhat. Read this Wikipedia page as an introduction to LFSRs. They are used frequently in pseudo-random number generation.
However, your pseudo-random number generator is inherently bad in that there is a linear correlation between the highest-order bit of a previously generated number and the lowest-order bit of the number generated next. You shift the highest bit B out, and then the lowest-order bit of the new number will be the XOR of B, the lowest-order bit of the additive constant num1, and the lowest-order bit of the XORed constant num2, because binary addition is equivalent to exclusive-or at the lowest-order bit. Most likely your PRNG has other similar deficiencies. Creating good PRNGs is hard.
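A quick way to convince yourself of that claim is a small C check (my own sketch, using the generalized form from the question with one of the constant pairs listed there): the low bit of each new state always equals the old high bit XORed with the low bits of num1 and num2.

#include <stdio.h>

#define BITS 16

int main(void) {
    /* Constants from one of the pairs listed in the question; any odd pair shows the effect. */
    const unsigned int num1 = 0x778B, num2 = 0x4A8B;
    const unsigned int mask = (1U << BITS) - 1U;

    unsigned int r = 0U;
    for (int i = 0; i < 100000; i++) {
        unsigned int high = (r >> (BITS - 1)) & 1U;
        unsigned int next = ((high + (r << 1) + num1) ^ num2) & mask;

        /* Predicted low bit: old high bit XOR low bit of num1 XOR low bit of num2. */
        unsigned int predicted = high ^ (num1 & 1U) ^ (num2 & 1U);
        if ((next & 1U) != predicted) {
            printf("prediction failed at step %d\n", i);
            return 1;
        }
        r = next;
    }
    printf("low bit predicted correctly for all steps\n");
    return 0;
}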
However, I must admit that the C64 code is pleasingly compact!