I am using Keras 2.3.1 for a ternary (3-class) classification problem, with keras.metrics.AUC() as the metric. I see val_auc_3 at the end of each epoch. I would like to know whether "val_auc_3" is the average AUC (i.e. [auc(class1 vs all) + auc(class2 vs all) + auc(class3 vs all)]/3) or the AUC of the third class only (i.e. just class 3 vs. all).
Is there any way to disregard one of the AUCs?
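For reference, what I mean by the per-class (one-vs-rest) AUCs and their average is something like the following, computed outside Keras with scikit-learn; y_true and y_prob are hypothetical arrays of integer labels and softmax outputs:

import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical data: integer labels in {0, 1, 2} and softmax probabilities.
y_true = np.array([0, 2, 1, 2, 0, 1])
y_prob = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.3, 0.6],
                   [0.2, 0.6, 0.2],
                   [0.2, 0.2, 0.6],
                   [0.6, 0.3, 0.1],
                   [0.3, 0.5, 0.2]])

# One-vs-rest AUC for each class separately.
per_class_auc = [roc_auc_score((y_true == k).astype(int), y_prob[:, k])
                 for k in range(y_prob.shape[1])]

# Average over only the classes of interest, e.g. dropping class 0.
keep = [1, 2]
print(per_class_auc, np.mean([per_class_auc[k] for k in keep]))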
I have noticed that my gensim Doc2Vec (DBOW) model is sensitive to document tags. My understanding was that these tags are cosmetic and so they should not influence the learned embeddings. Am I misunderstanding something? Here is a minimal example:
from gensim.test.utils import common_texts
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
import numpy as np
import os
os.environ['PYTHONHASHSEED'] = '0'
reps = []
for a in [0, 500]:
    documents = [TaggedDocument(doc, [i + a])
                 for i, doc in enumerate(common_texts)]
    model = Doc2Vec(documents, vector_size=100, window=2, min_count=0,
                    workers=1, epochs=10, dm=0, seed=0)
    reps.append(np.array([model.docvecs[k] for k in range(len(common_texts))]))
reps[0].sum() == reps[1].sum()
This last line returns False. I am working with gensim 3.8.3 and Python 3.5.2. More generally, do the tag values play any role (assuming they are unique)? I ask because I have found that using different tags for documents in a classification task leads to widely varying performance.
Thanks in advance.
First & foremost, your test isn't even comparing vectors corresponding to the same texts!
In run #1, the vector for the 1st text is in model.docvecs[0]. In run #2, the vector for the 1st text is in model.docvecs[500].
And, in run #2, the vector at model.docvecs[0] is just a randomly-initialized, but never-trained, vector - because none of the training texts had a document tag of (int) 0. (If using pure ints as the doc-tags, Doc2Vec uses them as literal indexes - potentially leaving any unused slots less than your highest tag allocated-and-initialized, but never-trained.)
Since common_texts has only 11 entries, in run #2 all 11 of the vectors you collect into reps (model.docvecs[0] through model.docvecs[10]) are untrained garbage, uncorrelated with any of your texts.
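A sketch of a corrected collection step, indexing each run's vectors by the tags that were actually trained so both runs are compared text-by-text (this just reuses the setup from the question):

# For each run, read docvecs by the trained tags (i + a), so reps[0][i] and
# reps[1][i] both refer to the i-th text of common_texts.
reps = []
for a in [0, 500]:
    documents = [TaggedDocument(doc, [i + a])
                 for i, doc in enumerate(common_texts)]
    model = Doc2Vec(documents, vector_size=100, window=2, min_count=0,
                    workers=1, epochs=10, dm=0, seed=0)
    reps.append(np.array([model.docvecs[i + a]
                          for i in range(len(common_texts))]))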
However, even after correcting that:
As explained in the Gensim FAQ answer #11, determinism in this algorithm shouldn't generally be expected, given many sources of potential randomness, and the fuzzy/approximate nature of the whole approach. If you're relying on it, or testing for it, you're probably making some unwarranted assumptions.
In general, tests of these algorithms should be evaluating "roughly equivalent usefulness in comparative uses" rather than "identical (or even similar) specific vectors". For example, a test of whether apple and orange are roughly at the same positions in each other's nearest-neighbor rankings makes more sense than checking their (somewhat arbitrary) exact vector positions or even cosine-similarity.
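One rough way to quantify that, using plain NumPy on the (corrected) reps matrices from above; the choice of k = 3 neighbors is just an illustrative assumption:

# Do both runs agree on each document's nearest neighbors (by cosine
# similarity), even if the raw vectors differ?
def top_neighbors(vectors, k=3):
    normed = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = normed @ normed.T
    np.fill_diagonal(sims, -np.inf)          # ignore self-similarity
    return np.argsort(-sims, axis=1)[:, :k]  # indices of the k nearest docs

nn0, nn1 = top_neighbors(reps[0]), top_neighbors(reps[1])
overlap = np.mean([len(set(a) & set(b)) / len(a) for a, b in zip(nn0, nn1)])
print(overlap)  # closer to 1.0 means the runs rank neighbors similarly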
Additionally:
tiny toy datasets like common_texts won't show the algorithm's usual behavior/benefits
PYTHONHASHSEED is only consulted by the Python interpreter at startup; setting it from Python can't have any effect. But also, the kind of indeterminism it introduces only comes up with separate interpreter launches: a tight loop within a single interpreter run like this wouldn't be affected by that in any case.
Have you checked the magnitude of the differences?
Just running:
delta = reps[0].sum() - reps[1].sum()
gives an aggregate difference of -1.2598932e-05 when I run it.
A dimension-wise comparison:
diff = reps[0] - reps[1]
eps = 10**-4
over = (np.abs(diff) <= eps).all()
returns True on the vast majority of runs, which means you are getting quite reproducible results given the complexity of the calculations.
I would blame the numerical stability of the calculations or uncontrolled randomness. Even though you do try to control the random seed, NumPy has its own random seed and the random standard library has another, so you are not controlling all of the sources of randomness. This can also have an influence on the results, but I did not check the actual implementations in gensim and its dependencies.
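A minimal sketch of seeding those additional sources as well (whether gensim actually draws from them here is an assumption I have not verified):

import random
import numpy as np

random.seed(0)     # seed Python's standard-library RNG
np.random.seed(0)  # seed NumPy's global RNG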
Change
import os
os.environ['PYTHONHASHSEED'] = '0'
to
import os
import sys

# PYTHONHASHSEED is only read at interpreter startup, so set it and then
# re-launch the current script in a fresh interpreter that will see it.
hashseed = os.getenv('PYTHONHASHSEED')
if not hashseed:
    os.environ['PYTHONHASHSEED'] = '0'
    os.execv(sys.executable, [sys.executable] + sys.argv)
I am running a structural equation model with the lavaan.survey package to account for a complex survey design.
I have three latent and two manifest exogenous variables, and a manifest endogenous variable. All variables are ordinal.
I run sem() with the "DWLS" estimator, and then the lavaan.survey() function with the same estimator. This gives me weird results, with large standard errors and p-values close to 1.
I don't follow the two-step estimation (with and without the survey procedure) employed in lavaan.survey. Do I need "DWLS" in both estimation steps, or can I use robust maximum likelihood for the final estimation?
Are you referring to the R package or to Amos?
As you know, Amos is the best software for SEM.
Fouad,
Statistical Consultant
I would like to understand the general idea behind hybrid modelling (in particular state events) from a numerical point of view (although I am not a mathematician :)). Given the following Modelica model:
model BouncingBall
  constant Real g = 9.81;
  Real h(start=1);
  Real v(start=0);
equation
  der(h) = v;
  der(v) = -g;
  when h < 0 then
    reinit(v, -pre(v));
  end when;
end BouncingBall;
I understand the concept of when and reinit.
The equations in the when statement are only active when the condition becomes true, right?
Let's assume the ball would hit the floor at exactly 2 s. Since I am using a multi-step solver, does that mean the solver "goes beyond 2 seconds" and recognizes that h < 0 (let's assume at simulation time = 2.5 s, h = -0.7)? What does "the time for the event is searched using a crossing function" mean? Is there a simple explanation (example)?
Is the solver now going back? Taking a smaller step-size?
What does the pre() operation mean in that context?
noEvent(): "Expressions are taken literally instead of generating crossing functions. Since there is no crossing function, there is no requirement that the expression can be evaluated beyond the event limit": What does that mean? Given the same example with the bouncing ball: the solver detects at time 2.5 that h = -0.7. What's the difference between with and without noEvent()?
Yes, the body of when is only executed at events.
Simple view: The solver takes steps, and then uses a continuous extension to generate a (smooth) interpolation formula for the previous step. That interpolation formula can be used to generate a plot, and also for finding the first point where h has crossed zero (likely 2.000000001). An event iteration is then done at that interpolated point - and afterwards the solver is restarted.
I wouldn't say that the solver goes back. It takes a partial step and then continues forward. Some solvers need to reduce the step-size a lot after the event - others don't.
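A rough numerical illustration of that locate-the-crossing-and-restart loop, sketched in Python with scipy rather than a Modelica tool (the step limits and bounce count are my own assumptions):

import numpy as np
from scipy.integrate import solve_ivp

g = 9.81

def rhs(t, y):
    # y = [h, v]; der(h) = v, der(v) = -g
    return [y[1], -g]

def hit_floor(t, y):
    # Crossing function: the solver watches for a sign change of h along its
    # interpolated solution and locates the crossing time by root-finding.
    return y[0]
hit_floor.terminal = True   # stop integration at the event
hit_floor.direction = -1    # only downward crossings (h going negative)

t, y = 0.0, [1.0, 0.0]
for _ in range(5):  # simulate 5 bounces
    sol = solve_ivp(rhs, (t, t + 10), y, events=hit_floor, max_step=0.1)
    t = sol.t[-1]                 # event time found on the interpolated step
    h, v = sol.y[:, -1]
    y = [h, -v]                   # reinit(v, -pre(v)), then restart the solver
    print("bounce at t = %.6f" % t)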
pre(x) is set to the value of x before the event.
noEvent(h<0) basically means evaluate the expression as written without all the bells-and-whistles of crossing functions. You cannot use when noEvent(h<0) then
There are many additional points:
If you are familiar with Sturm-sequences or control theory you might realize that it is not necessary to interpolate a formula to determine if it crossed zero or not in an interval (and some tools use that). The fact that the function is not necessarily smooth makes it a bit more complicated, and also means that derivative-tests cannot be used.
How much the solver is reset depends on the kind of solver. One-step solvers (Runge-Kutta) can be restarted directly as if virtually nothing happened, whereas multi-step solvers (BDF/Adams - such as dassl/lsodar/cvode) need to start with lower order and smaller step-size.
In trying to port over some Matlab code to F#, I'm trying to make sure the translations are accurate. As of now, there are cases where I'm not completely sure whether there are mistakes. Since a lot of the code is statistical in nature, it would be convenient to be able to seed the F# generators to the same state as Matlab's. It would also help with triangulating the exact equations that are wrong. Wanted to ask before I started dumping Matlab generated random numbers to csv files and solving this issue in a manual way.
This is not a definitive answer; implementing your own random number generator in both Matlab and F# would probably yield the most reliable results. You are also bound to bump into issues of thread safety in .NET and of matrix shapes in Matlab. For example:
In matlab:
rng(200,'twister')
rand(1,5)
ans =
0.9476 0.2265 0.5944 0.4283 0.7641
In F#:
open MathNet.Numerics.Random
let random1b = MersenneTwister(200)
random1b.NextDoubles(5)
val it : float [] = [|0.9476322592; 0.4941436297; 0.2265474238;
0.1485590497; 0.5944201448|]
The 1st, 3rd, and 5th random numbers do match.
Now it's possible you could replicate this somehow by playing around with different versions and/or F# and Matlab array dimensions.
The MathNet Random Docs.
I'm doing some linear programming exercises for an algorithms course, and in doing so I'm manually solving many operations with fractions. I realized that a human being doesn't suffer from numerical instability: we just keep values in fractional representation and only evaluate the expression at the end (possibly with a calculator).
Is there any technique that does this automatically?
I'm thinking of something that achieves some kind of symbolic computation, simplifies the numbers internally, and only yields a numeric value when an expression is finally evaluated.
Boost contains a rational number library (Boost.Rational) which might be of help.
In Python you can have a look at fractions:
import fractions
a = fractions.Fraction(2,3)
a*2
# Fraction(4, 3)
a**2
# Fraction(4, 9)
'Value: %.2f' % a
# 'Value: 0.67'
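To connect this to the linear-programming exercises, here is a small sketch (my own example, not from any particular solver) of doing Gauss-Jordan elimination exactly with fractions.Fraction, so the pivoting steps never accumulate floating-point error and the result is only converted to a decimal at the end:

from fractions import Fraction

# Solve A x = b exactly over the rationals: 2x + y = 5, x + 3y = 10.
A = [[Fraction(2), Fraction(1)],
     [Fraction(1), Fraction(3)]]
b = [Fraction(5), Fraction(10)]

n = len(A)
M = [row + [rhs] for row, rhs in zip(A, b)]    # augmented matrix
for col in range(n):
    pivot = M[col][col]
    M[col] = [x / pivot for x in M[col]]        # scale pivot row to 1
    for r in range(n):
        if r != col:
            factor = M[r][col]
            M[r] = [x - factor * y for x, y in zip(M[r], M[col])]

x = [row[-1] for row in M]
print(x)                        # [Fraction(1, 1), Fraction(3, 1)]
print([float(v) for v in x])    # evaluate numerically only at the very end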