I'm trying to compute the following sum:
It is computed instantly. So I raise the number of points to 24^3, and it still runs fast:
But when the number of points is 25^3, it's almost impossible to wait for the result. Moreover, there is a warning:
Why is it so time-consuming to calculate a finite sum? How can I get a precise answer?
Try
max=24;
Timing[N[
Sum[1/(E^((i^2+j^2+k^2-3)/500)-1),{i,2,max},{j,1,max},{k,1,max}]+
Sum[1/(E^((i^2+j^2+k^2-3)/500)-1),{i,1,1},{j,2,max},{k,1,max}]+
Sum[1/(E^((i^2+j^2+k^2-3)/500)-1),{i,1,1},{j,1,1},{k,2,max}]]]
which quickly returns
{0.143978,14330.9}
and
max=25;
Timing[N[
Sum[1/(E^((i^2+j^2+k^2-3)/500)-1),{i,2,max},{j,1,max},{k,1,max}]+
Sum[1/(E^((i^2+j^2+k^2-3)/500)-1),{i,1,1},{j,2,max},{k,1,max}]+
Sum[1/(E^((i^2+j^2+k^2-3)/500)-1),{i,1,1},{j,1,1},{k,2,max}]]]
which quickly returns
{0.156976,14636.6}
and even
max=50;
Timing[N[
Sum[1/(E^((i^2+j^2+k^2-3)/500)-1),{i,2,max},{j,1,max},{k,1,max}]+
Sum[1/(E^((i^2+j^2+k^2-3)/500)-1),{i,1,1},{j,2,max},{k,1,max}]+
Sum[1/(E^((i^2+j^2+k^2-3)/500)-1),{i,1,1},{j,1,1},{k,2,max}]]]
which quickly returns
{1.36679,16932.5}
Restructuring your code this way avoids doing hundreds or thousands of If tests that would almost always evaluate to True, and it lets Sum potentially use symbolic algorithms to find those results instead of adding up each individual value.
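As an independent sanity check (not part of the Mathematica approach above), a brute-force Python version of the same finite sum, skipping the (1,1,1) point where the denominator is exactly zero, should land near the 16932.5 figure for max = 50:

import math

max_n = 50  # same upper bound as the max = 50 run above
total = sum(
    1.0 / (math.exp((i*i + j*j + k*k - 3) / 500.0) - 1.0)
    for i in range(1, max_n + 1)
    for j in range(1, max_n + 1)
    for k in range(1, max_n + 1)
    if (i, j, k) != (1, 1, 1)  # skip the point where the denominator is zero
)
print(total)  # should be close to the 16932.5 reported above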
Compare those results and timings if you replace Sum with NSum, or if you replace /500 with *.002.
As for why the timings suddenly change as you increment the bound: others have noticed in the past that there appear to be some hard-coded bounds inside some of the numerical algorithms. When a range is small enough, Mathematica uses one algorithm; when the range is just large enough to exceed that bound, it switches to another, potentially slower, algorithm. It is difficult or impossible to know exactly why you see this change without being able to inspect the decisions made inside the algorithms, and nobody outside Wolfram gets to see that information.
To get a more precise numerical value, you can change N[...] to N[..., 64] or N[..., 256], or eliminate the N entirely and get a large, complicated exact result.
Be cautious with this: check the results carefully to make certain that I have not made any mistakes. Some of this is just guesswork on my part.
I want to build an application that does something equivalent to running lsof in a loop (perhaps with different output formatting, since string processing may keep it from being real-time enough) and then associates each line (entry) with the iteration in which it was present. I will refer to these iterations as "frames" from here on, as that will make the rest easier to follow. My intention is that showing the times during which files are held open by applications can reveal something about their structure, while having little impact on their execution, which is often a problem.

One problem I have is processing the output, which would be a table relating frames × entries, and I am already anticipating wildly variable entry lengths. That runs into the familiar problem of representing very different scales geometrically: the smaller values become vanishingly small while the bigger ones become giant, and fragmentation makes it even worse. So my question is: do plotting libraries deal with this problem, and if so, how?
The easiest and most well-established technique for showing both small and large values in reasonable detail is a logarithmic scale. Instead of plotting raw values, plot their logarithms. This is notoriously problematic if you can have zero or even negative values, but as I understand your situation, all your lengths would be strictly positive, so this should work.
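For instance, a minimal matplotlib sketch (the lengths here are made up for illustration):

import matplotlib.pyplot as plt

# Hypothetical entry lengths spanning several orders of magnitude.
lengths = [3, 12, 150, 7, 48000, 9, 1200]

fig, ax = plt.subplots()
ax.bar(range(len(lengths)), lengths)
ax.set_yscale('log')  # log scale keeps both the 3 and the 48000 readable
ax.set_xlabel('entry')
ax.set_ylabel('length (log scale)')
plt.show()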
Another statistical solution you could apply is to plot ranks instead of raw values. Take all the observed values and put them in a sorted list. When plotting any single data point, instead of plotting the value itself, you look up that value in the sorted list (possibly using binary search, since the list is sorted), then plot the index at which you found it.
This is a monotonic transformation, so small values map to small indices and big values to big indices. On the other hand, it completely discards the actual magnitudes; only the relative comparisons matter.
If this is too radical, you could consider using it as an ingredient for something more tuneable. You could experiment with a linear combination, i.e. plot
a*x + b*log(x) + c*rank(x)
then tweak a, b and c till the result looks pleasing.
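A minimal Python sketch of that blend, assuming strictly positive values (the data and coefficients are made up for illustration):

import math
from bisect import bisect_left

def rank(sorted_values, x):
    # Position of x among all observed values, found by binary search.
    return bisect_left(sorted_values, x)

def blended(x, sorted_values, a=0.0, b=1.0, c=0.5):
    # Tunable mix of raw value, log, and rank; tweak a, b, c to taste.
    return a * x + b * math.log(x) + c * rank(sorted_values, x)

values = [3, 12, 150, 7, 48000, 9, 1200]  # hypothetical lengths
sv = sorted(values)
plotted = [blended(x, sv) for x in values]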
I am working on a Wordle bot to calculate the best first (and subsequent) guess(es). I am assuming all 13,000 possible guesses are equally likely (for now), and I am running into a huge speed issue.
I cannot think of a valid way to avoid a triple for loop, each over 13,000 elements. Obviously, this is ridiculously slow and would take about 20 hours on my laptop to compute the best first guess (I assume it would be faster for subsequent guesses due to fewer valid words).
The way I see it, I definitely need the first loop to test each word as a guess.
I need the second loop to find the colors/results given each possible answer with that guess.
Then, I need the third loop to see which guesses are valid given the guess and the colors.
Then, I find the average words remaining for each guess, and choose the one with the lowest average using a priority queue.
I think the first two loops are unavoidable, but maybe there's a way to get rid of the third?
Does anyone have any thoughts or suggestions?
I did a similar thing, for a list with 11,000 or so entries. It took 27 minutes.
I did it using a pregenerated (in one pass) representation of the letters in each word as a bitfield (i.e. one 32-bit integer), and then did crude testing using the AND instruction. If a word failed that test, it skipped the rest of the loop.
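A rough Python sketch of that idea (the word list and the "required" letters here are placeholders, not from the original post):

def letter_mask(word):
    # One bit per letter: bit 0 = 'a' present, bit 1 = 'b', and so on.
    mask = 0
    for ch in word:
        mask |= 1 << (ord(ch) - ord('a'))
    return mask

word_list = ['crane', 'slate', 'adieu', 'pious']  # hypothetical tiny list

# Precompute the masks in a single pass over the word list.
masks = {w: letter_mask(w) for w in word_list}

# Crude prefilter with AND: drop candidates missing a required letter.
required = letter_mask('ae')  # hypothetical letters known to be in the answer
candidates = [w for w in word_list if masks[w] & required == required]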
I am trying to assess a doc2vec model based on the code from here. Basically, I want to know the percentage of inferred documents that are found to be most similar to themselves. This is my current code:
import collections

docs = 0
ranks = []
for doc_id, doc in enumerate(cur.execute('SELECT Text FROM Patents')):
    docs += 1
    doc = clean_text(doc)
    inferred_vector = model.infer_vector(doc)
    sims = model.docvecs.most_similar([inferred_vector], topn=len(model.docvecs))
    rank = [docid for docid, sim in sims].index(doc_id)
    ranks.append(rank)

counter = collections.Counter(ranks)
accuracy = counter[0] / docs  # fraction of docs whose nearest vector is their own
This code works perfectly with smaller datasets. However, since I have a huge file with millions of documents, it is far too slow; it would take months to finish. I profiled my code, and most of the time is consumed by the following line: sims = model.docvecs.most_similar([inferred_vector], topn=len(model.docvecs)).
If I am not mistaken, this is comparing each document to every other document. I think computation time might be massively reduced if I changed this to topn=1, since the only thing I want to know is whether the most similar document is the document itself or not. Doing this would basically take each doc (i.e., inferred_vector), find its single most similar document (i.e., topn=1), and then just check whether that document is itself or not. How could I implement this? Any help or idea is welcome.
To have most_similar() return only the single most-similar document, that is as simple as specifying topn=1.
However, to know which one document of the millions is the most-similar to a single target vector, the similarities to all the candidates must be calculated & sorted. (If even one document was left out, it might have been the top-ranked one!)
Making sure absolutely no virtual-memory swapping is happening will help ensure that brute-force comparison happens as fast as possible, all in RAM – but with millions of docs, it will still be time-consuming.
What you're attempting is a fairly simple "self-check" of whether training led to a self-consistent model: whether re-inference of a document creates a vector "very close to" the doc-vector left over from bulk training. Failing that would indicate some big problems in doc-prep or training, but it's not a true measure of the model's "accuracy" for any real task, and the model's value is best evaluated against your intended use.
Also, because this "re-inference self-check" is just a crude sanity check, there's no real need to do it for every document. Picking a thousand (or ten thousand, or whatever) random documents will give you a representative idea of whether most of the re-inferred vectors have this quality, or not.
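As a sketch of that sampling idea, assuming gensim 4.x (model.dv), integer document tags, and a hypothetical get_doc_words() helper that returns a document's cleaned token list:

import random

sample_ids = random.sample(range(len(model.dv)), k=1000)
hits = 0
for doc_id in sample_ids:
    inferred = model.infer_vector(get_doc_words(doc_id))
    # topn=1 still scans all vectors, but only 1000 queries are issued.
    top_id, _ = model.dv.most_similar([inferred], topn=1)[0]
    hits += (top_id == doc_id)
print('self-top-1 rate over sample:', hits / len(sample_ids))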
Similarly, you could simply check the similarity of the re-inferred vector against the single in-model vector for that same document-ID, and check whether they are "similar enough". (This will be much faster, but could also be done on just a random sample of docs.) There's no magic proper threshold for "similar enough"; you'd have to pick one that seems to match your other goals. For example, using scikit-learn's cosine_similarity() to compare the two vectors:
from sklearn.metrics.pairwise import cosine_similarity
# ...
inferred_vector = model.infer_vector(doc_words)
looked_up_vector = model.dv[doc_id]
self_similarity = cosine_similarity([inferred_vector], [looked_up_vector])[0]
# then check that value against some threshold
(You have to wrap the single vectors in lists as arguments to cosine_similarity(), then access the 0th element of the return value, because it is designed to usually work on larger lists of vectors.)
With this calculation, you wouldn't know if, for example, some of the other stored doc-vectors are a little closer to your inferred target, but that may not be that important, anyway. The docs might be really similar! And while the original "closest to itself" self-check will fail miserably if there were major defects in training, even a well-trained model will likely have some cases where natural model jitter prevents a "closest to itself" result for every document. (With more documents inside the same number of dimensions, or certain corpuses with lots of very-similar documents, this becomes more common, but it is not a concerning indicator of any model problems.)
I need to tag a load of books, each with a unique id. Because human error would really mess with the system, I need the code to detect when one of the digits is wrong. That means no two valid codes can have a Hamming distance of 1; alternatively, a parity-check method or something similar such that some errors can be detected would also work. I would normally post what I've done so far, but I don't know where to start really.
Thanks
If you are dealing with human-readable data, I would go with something like the Luhn algorithm. It is designed for simple manual computation and keeps the data in decimal form.
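For reference, here is a minimal Python sketch of the Luhn check digit; this is my own illustration, so verify it against a known-good implementation before relying on it:

def luhn_check_digit(payload):
    # Walk the digits right to left; the rightmost payload digit sits
    # next to the future check digit, so it gets doubled first.
    total = 0
    for i, ch in enumerate(reversed(payload)):
        d = int(ch)
        if i % 2 == 0:
            d *= 2
            if d > 9:
                d -= 9  # same as summing the two digits of d
        total += d
    return (10 - total % 10) % 10

def luhn_valid(number):
    return luhn_check_digit(number[:-1]) == int(number[-1])

# luhn_check_digit('7992739871') == 3, so '79927398713' passes luhn_valid.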
The problem you will have with binary codes is that they scramble the data a little. So unless you plan to encode your ids in an image, such as a barcode or QR code, they are probably not the right choice. Also, optimal decoders are complicated algorithms; they are certainly not practical to check by hand.
If you insist on going with a binary code, then you'll have to decide how many bit errors you want to detect. You'll need a Hamming distance of at least 2 to detect an error; otherwise a single bit flip will transform one valid code into another equally valid code, and the error will go unnoticed. If you want to correct N errors, then you'll need to choose a code with a distance of at least 2N+1.
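If it helps, here is a small Python sketch (my own illustration) for checking the minimum distance of a candidate set of codewords:

def hamming(a, b):
    # Number of differing bits between two equal-length codewords.
    return bin(a ^ b).count('1')

def min_distance(codewords):
    # A code with minimum distance d detects up to d-1 bit errors
    # and corrects up to (d-1)//2.
    return min(hamming(a, b)
               for i, a in enumerate(codewords)
               for b in codewords[i + 1:])

# e.g. min_distance([0b0000, 0b0111, 0b1011]) -> 2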
If you are planning to encode hexadecimal digits, then you'll need 4 bits of storage per digit, which will require a code with 9 bits of redundancy per message in order to correct a single-digit error. I'm not even sure such a perfect code exists, and in reality you might find you need more redundancy to protect all bits equally.
I have a set of results (numbers), and I would like to know if a given result is very good/bad compared to the previous results (only previous).
Each result is a number ∈ ℝ⁺. For example, if you have the sequence 10, 11, 10, 9.5, 16 then 16 is clearly a very good result compared to the previous ones. I would like to find an algorithm to detect this situation (a very good/bad result compared to previous results).
A more general way to state this problem is: how to determine whether a point, in a given set of data, is scattered away from the rest of the data.
Now, that might look like a peak detection problem, but since the previous values are not constant there are many tiny peaks, and I only want the big ones.
My first idea was to compute the mean and the standard deviation, but that approach is quite limited. Indeed, if there is one huge/low value among the previous results, it dramatically changes the mean/standard deviation, and the next results have to be even greater/lower to exceed the standard deviation (in order to be detected); therefore many points will not be (properly) detected.
I'm quite sure this must be a well-known problem.
Can anyone help me with this?
This kind of problem is called Anomaly Detection.
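One standard anomaly-detection trick that directly addresses your concern about a single extreme value skewing the mean and standard deviation is a robust z-score built from the median and the median absolute deviation (MAD). A minimal Python sketch (the 0.6745 factor and the 3.5 cutoff are conventional defaults, not anything derived from your data):

import statistics

def is_anomaly(history, value, cutoff=3.5):
    # Median and MAD are barely affected by one huge/low previous value,
    # unlike the mean and standard deviation.
    med = statistics.median(history)
    mad = statistics.median(abs(x - med) for x in history)
    if mad == 0:
        return value != med  # degenerate case: all previous values identical
    robust_z = 0.6745 * (value - med) / mad
    return abs(robust_z) > cutoff

# is_anomaly([10, 11, 10, 9.5], 16) -> True, matching the example above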