Cross-validation output interpretation from survival analysis

I have performed a cross-validation using the errorest function, but am not sure how to interpret the output Brier score. Is there a way to visualise the cross-validation? Any other suggestions on how to perform and visualise the CV?
library(ipred)
df.t <- structure(list(time = c(1796, 1644.04166666667,
606.041666666667, 1327.04166666667, 665, 2461, 1824, 1554.04166666667,
601.958333333333, 1638.95833333333), status = c(0L,
0L, 1L, 0L, 1L, 0L, 0L, 0L, 1L, 1L), Comb2 = c("Low", "Low",
"High", "Low", "High", "Low", "Low", "High", "High", "Low")), row.names = c("1025",
"1101", "1198", "1330", "1393", "1428", "1473", "1676", "175",
"1754"), class = "data.frame")
err <- errorest(Surv(time, status) ~Comb2, data=df.t, model=survfit,
predict=NULL, est.para=control.errorest(k=5))
Output:
Call:
errorest.data.frame(formula = Surv(time, status) ~ Comb2, data = df.t.top,
model = survfit, predict = NULL, est.para = control.errorest(k = 5))
5-fold cross-validation estimator of Brier's score
Brier's score: 0.2622

I can't get your code working, so I can't reproduce the above; however, some comments:
The Brier score is a scoring rule, and in a survival setting there is no known baseline value. That means you cannot interpret this value on its own: 0.2622 could be 'good' or 'bad', but without another model to compare against it is meaningless.
Visualising the score across folds also does not provide useful information; you should only ever look at the mean score across all folds.
Visualising the CV itself does not make sense either. The purpose of CV is to obtain an average performance estimate across all folds; inspecting individual folds does not provide useful information.
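To make that concrete, here is a toy sketch with the plain (non-survival) classification Brier score on made-up data; the data and model names are purely illustrative. The absolute value says very little until a reference model sits next to it:

import numpy as np

rng = np.random.default_rng(0)

# made-up binary outcomes with a 30% event rate (illustration only, not the survival setting)
y = rng.binomial(1, 0.3, size=1000)

# "model A" predicts 0.5 for everyone; "model B" predicts the overall event rate (a no-information baseline)
brier_a = np.mean((0.5 - y) ** 2)        # = 0.25 by construction, whatever the data look like
brier_b = np.mean((y.mean() - y) ** 2)   # ~ 0.21 here

print(brier_a, brier_b)  # only the comparison shows that model A adds nothing over the baseline

The same logic applies to the survival (Graf) version that errorest reports: quote it next to a baseline such as a Kaplan-Meier estimate, which is what the benchmark below does.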
Since you're asking for other suggestions, I will plug my package, which can easily obtain predictions, compare multiple models, and also visualise prediction error curves. Here's an example with your data:
library(mlr3); library(mlr3proba)
set.seed(1)
# create data
df.t <- data.frame(
time = c(1796, 1644.04166666667,
606.041666666667, 1327.04166666667, 665, 2461, 1824, 1554.04166666667,
601.958333333333, 1638.95833333333),
status = c(0L, 0L, 1L, 0L, 1L, 0L, 0L, 0L, 1L, 1L),
Comb2 = factor(c("Low", "Low", "High", "Low", "High",
"Low", "Low", "High", "High", "Low"))
)
rownames(df.t) <- c("1025", "1101", "1198", "1330", "1393", "1428", "1473", "1676",
"175", "1754")
# convert data to 'task'
task = TaskSurv$new("dft", df.t, time = "time", event = "status")
# get Cox PH learner
cox = lrn("surv.coxph")
# use 5-fold cross-validation
cv = rsmp("cv", folds = 5)
# resample
rr = resample(task, cox, cv)
# get brier/graf score
rr$aggregate(msr("surv.brier"))
# get prediction
rr$prediction()
# inspect individual predictions
rr$predictions("test")
# compare to Kaplan-Meier baseline
km = lrn("surv.kaplan")
# make benchmark experiment
design = benchmark_grid(task, list(cox, km), cv)
# run experiment
bm = benchmark(design)
# get brier score
bm$aggregate(msr("surv.brier"))

Related

I need help changing the font in a package

I am trying to change the font in a table I made in stargazer() to Times New Roman and can't figure out how to do that. I made 4 multiple regression models and put them all into a single table.
stargazer(reg3FModel1, reg3FModel2, reg3FModel3, reg3FModel4,
single.row = TRUE,
digits = 2,
font.size = "large",
type = "text",
model.numbers = FALSE,
dep.var.caption = "Response variables",
omit.stat = c("rsq", "adj.rsq", "ser", "f", "n"),
title = "Regression Analysis with 3-Factor Model")

Technical Analysis (MACD) for crypto trading

Background:
I have been writing a crypto trading bot for fun and profit.
So far, it connects to an exchange and gets streaming price data.
I am using this price to create a technical indicator (MACD).
Generally for MACD, it is recommended to use closing prices for 26, 12 and 9 days.
However, for my trading strategy, I plan to use data for 26, 12 and 9 minutes.
Question:
I am getting multiple (say 10) price ticks in a minute.
Do I simply average them and round the time to the next minute (so they all fall in the same minute bucket)? Or is there a better way to handle this?
Many Thanks!
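For reference, once the ticks have been reduced to one-minute closing prices, the MACD itself is just a few exponential moving averages. A minimal pandas sketch (the close series and the function name are assumptions for illustration, not taken from the question):

import pandas as pd

def macd(close, fast=12, slow=26, signal=9):
    # `close` is assumed to be a pandas Series of per-bar closing prices indexed by timestamp;
    # spans are counted in bars (minutes in this setup), not days
    ema_fast = close.ewm(span=fast, adjust=False).mean()
    ema_slow = close.ewm(span=slow, adjust=False).mean()
    macd_line = ema_fast - ema_slow
    signal_line = macd_line.ewm(span=signal, adjust=False).mean()
    histogram = macd_line - signal_line
    return macd_line, signal_line, histogram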
This is how I handled it. Streaming data comes in at sub-second intervals. The code checks for a new low and high during the streaming period and builds the candle. Probably ugly since I'm not a trained developer, but it works.
Adjust "...round('20s')" and "if dur > 15:" for whatever candle period you want.
def on_message(self, msg):
    # tkr, ctime, cnx, self.st, self.df and self.candle are assumed to be defined elsewhere in the class
    df = pd.json_normalize(msg, record_prefix=msg['type'])
    df['date'] = df['time']
    df['price'] = df['price'].astype(float)
    df['low'] = df['low'].astype(float)
    # take the latest tick: round its time to the candle period and note the current price and candle extremes
    for i in range(0, len(self.df)):
        if i == (len(self.df) - 1):
            self.rounded_time = self.df['date'][i]
            self.rounded_time = pd.to_datetime(self.rounded_time).round('20s')
            self.lhigh = self.df['price'][i]
            self.lhighcandle = self.candle['high'][i]
            self.llow = self.df['price'][i]
            self.lowcandle = self.candle['low'][i]
            self.close = self.df['price'][i]
    # keep the running high/low for the candle being built
    if self.lhigh > self.lhighcandle:
        nhigh = self.lhigh
    else:
        nhigh = self.lhighcandle
    if self.llow < self.lowcandle:
        nlow = self.llow
    else:
        nlow = self.lowcandle
    newdata = pd.DataFrame.from_dict({
        'date': self.df['date'],
        'tkr': tkr,
        'open': self.df.price.iloc[0],
        'high': nhigh,
        'low': nlow,
        'close': self.close,
        'vol': self.df['last_size']})
    self.candle = self.candle.append(newdata, ignore_index=True).fillna(0)
    # close the candle once the current time passes the rounded bucket boundary
    closeit = False  # avoid a NameError when the boundary has not been passed yet
    if ctime > self.rounded_time:
        closeit = True
        self.en = time.time()
    if closeit:
        dur = (self.en - self.st)
        if dur > 15:
            self.st = time.time()
            out = self.candle[-1:]
            out.to_sql(tkr, cnx, if_exists='append')
            # reset the working candle
            dat = ['tkr', 0, 0, 100000, 0, 0]
            self.candle = pd.DataFrame([dat], columns=['tkr', 'open', 'high', 'low', 'close', 'vol'])
As far as I know, most or all technical indicator formulas rely on same-sized bars to produce accurate and meaningful results. You'll have to do some data transformation. Here's an example of an aggregation technique that uses quantization to get all your bars into uniform sizes. It will convert small bar sizes to larger bar sizes; e.g. second to minute bars.
// C#, see link above for more info
quoteHistory
    .OrderBy(x => x.Date)
    .GroupBy(x => x.Date.RoundDown(newPeriod))
    .Select(x => new Quote
    {
        Date = x.Key,
        Open = x.First().Open,
        High = x.Max(t => t.High),
        Low = x.Min(t => t.Low),
        Close = x.Last().Close,
        Volume = x.Sum(t => t.Volume)
    });
See Stock.Indicators for .NET for indicators and related tools.
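The same quantization idea in Python/pandas, for comparison: resample buckets the ticks into fixed-width bars, and ohlc()/sum() do the per-bucket aggregation. This is only a sketch; the column names time, price and size are assumptions about the tick feed:

import pandas as pd

def ticks_to_bars(ticks, period='1min'):
    # `ticks` is assumed to be a DataFrame with 'time', 'price' and 'size' columns
    ticks = ticks.set_index(pd.to_datetime(ticks['time'])).sort_index()
    bars = ticks['price'].resample(period).ohlc()           # open/high/low/close per bucket
    bars['volume'] = ticks['size'].resample(period).sum()   # traded volume per bucket
    return bars.dropna(subset=['close'])                    # drop buckets with no ticks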

Discrepancies in gensim doc2vec embedding vectors

I use the gensim Doc2Vec package to train doc2vec embeddings. I would expect that two models trained with identical parameters and data would have very close doc2vec vectors. However, in my experience this is only true for doc2vec trained with PV-DBOW and without word-embedding training (dbow_words = 0).
For PV-DM, and for PV-DBOW with dbow_words = 1, i.e. every case where the word embeddings are trained along with the doc vectors, the doc2vec embedding vectors of identically trained models are fairly different.
Here is my code:
from sklearn.datasets import fetch_20newsgroups
from gensim import models
import scipy.spatial.distance as distance
import numpy as np
from nltk.corpus import stopwords
from string import punctuation
def clean_text(texts, min_length=2):
    clean = []
    # don't remove apostrophes
    translator = str.maketrans(punctuation.replace('\'', ' '), ' ' * len(punctuation))
    for text in texts:
        text = text.translate(translator)
        tokens = text.split()
        # remove non-alphabetic tokens
        tokens = [word.lower() for word in tokens if word.isalpha()]
        # filter out stop words
        stop_words = stopwords.words('english')
        tokens = [w for w in tokens if w not in stop_words]
        # filter out short tokens
        tokens = [word for word in tokens if len(word) >= min_length]
        tokens = ' '.join(tokens)
        clean.append(tokens)
    return clean

def tag_text(all_text, tag_type=''):
    tagged_text = []
    for i, text in enumerate(all_text):
        tag = tag_type + '_' + str(i)
        tagged_text.append(models.doc2vec.TaggedDocument(text.split(), [tag]))
    return tagged_text

def train_docvec(dm, dbow_words, min_count, epochs, training_data):
    model = models.Doc2Vec(dm=dm, dbow_words=dbow_words, min_count=min_count)
    model.build_vocab(training_data)  # build the vocabulary from the function argument, not the global
    model.train(training_data, total_examples=len(training_data), epochs=epochs)
    return model

def compare_vectors(vector1, vector2):
    cos_distances = []
    for i in range(len(vector1)):
        d = distance.cosine(vector1[i], vector2[i])
        cos_distances.append(d)
    print(np.median(cos_distances))
    print(np.std(cos_distances))
dataset = fetch_20newsgroups(shuffle=True, random_state=1,remove=('headers', 'footers', 'quotes'))
n_samples = len(dataset.data)
data = clean_text(dataset.data)
tagged_data = tag_text(data)
data_labels = dataset.target
data_label_names = dataset.target_names
model_dbow1 = train_docvec(0, 0, 4, 30, tagged_data)
model_dbow2 = train_docvec(0, 0, 4, 30, tagged_data)
model_dbow3 = train_docvec(0, 1, 4, 30, tagged_data)
model_dbow4 = train_docvec(0, 1, 4, 30, tagged_data)
model_dm1 = train_docvec(1, 0, 4, 30, tagged_data)
model_dm2 = train_docvec(1, 0, 4, 30, tagged_data)
compare_vectors(model_dbow1.docvecs, model_dbow2.docvecs)
> 0.07795828580856323
> 0.02610614028793008
compare_vectors(model_dbow1.docvecs, model_dbow3.docvecs)
> 0.6476179957389832
> 0.14797587172616306
compare_vectors(model_dbow3.docvecs, model_dbow4.docvecs)
> 0.19878000020980835
> 0.06362519480831186
compare_vectors(model_dm1.docvecs, model_dm2.docvecs)
> 0.13536489009857178
> 0.045365127475424386
compare_vectors(model_dbow1.docvecs, model_dm1.docvecs)
> 0.6358324736356735
> 0.15150255674571805
UPDATE
I tried, as suggested by gojomo, to compare the differences between the vectors, and, unfortunately, those are even worse:
def compare_vector_differences(vector1, vector2):
    diff1 = []
    diff2 = []
    for i in range(len(vector1)-1):
        diff1.append(vector1[i+1] - vector1[i])
    for i in range(len(vector2)-1):
        diff2.append(vector2[i+1] - vector2[i])  # append to the list itself, not to diff2[i]
    cos_distances = []
    for i in range(len(diff1)):
        d = distance.cosine(diff1[i], diff2[i])
        cos_distances.append(d)
    print(np.median(cos_distances))
    print(np.std(cos_distances))
compare_vector_differences(model_dbow1.docvecs, model_dbow2.docvecs)
> 0.1134452223777771
> 0.02676398444178949
compare_vector_differences(model_dbow1.docvecs, model_dbow3.docvecs)
> 0.8464127033948898
> 0.11423789350773429
compare_vector_differences(model_dbow4.docvecs, model_dbow3.docvecs)
> 0.27400463819503784
> 0.05984108730423529
SECOND UPDATE
This time, after I finally understood gojomo's suggestion, things look fine.
def compare_distance_differences(vector1, vector2):
    diff1 = []
    diff2 = []
    for i in range(len(vector1)-1):
        diff1.append(distance.cosine(vector1[i+1], vector1[i]))
    for i in range(len(vector2)-1):
        diff2.append(distance.cosine(vector2[i+1], vector2[i]))
    diff_distances = []
    for i in range(len(diff1)):
        diff_distances.append(abs(diff1[i] - diff2[i]))
    print(np.median(diff_distances))
    print(np.std(diff_distances))
compare_distance_differences(model_dbow1.docvecs, model_dbow2.docvecs)
>0.017469733953475952
>0.01659284710785352
compare_distance_differences(model_dbow1.docvecs, model_dbow3.docvecs)
>0.0786697268486023
>0.06092163158218411
compare_distance_differences(model_dbow3.docvecs, model_dbow4.docvecs)
>0.02321992814540863
>0.023095123172320778
The doc-vectors (or word-vectors) of Doc2Vec & Word2Vec models are only meaningfully comparable to other vectors that were co-trained, in the same interleaved training sessions.
Otherwise, randomness introduced by the algorithms (random initialization & random sampling) and by slight differences in training ordering (from multithreading) will cause the trained positions of individual vectors to wander to arbitrarily different positions. Their relative distances/directions to other vectors that shared the interleaved training should be about equally useful from one model to the next.
But there's no one right place for such a vector, and measuring the differences between the vector for document '1' (or word 'foo') in one model, and the corresponding vector in another model, isn't reflective of anything the models/algorithms are trained to provide.
There's more information in the Gensim FAQ:
Q11: I've trained my Word2Vec/Doc2Vec/etc model repeatedly using the exact same text corpus, but the vectors are different each time. Is there a bug or have I made a mistake?
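As a rough check of the point above using the question's own models, one can compare which documents each model considers nearest to a given tag, rather than the raw coordinates. A sketch assuming the gensim 3.x docvecs API already used in the question:

def neighbour_overlap(model_a, model_b, tag='_0', topn=10):
    # fraction of shared top-n nearest documents for one doc tag across two models;
    # raw coordinates differ between runs, but the neighbourhoods should largely agree
    near_a = {t for t, _ in model_a.docvecs.most_similar(tag, topn=topn)}
    near_b = {t for t, _ in model_b.docvecs.most_similar(tag, topn=topn)}
    return len(near_a & near_b) / topn

# e.g. neighbour_overlap(model_dbow1, model_dbow2) should come out fairly high, even though
# compare_vectors() reports large raw-coordinate differences between the same two models.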

Sklearn RandomizedSearchCV suddenly stuck

I tried to do a RandomizedSearchCV for an SVM model, but it seems to take forever. It works fine for KNN. The process gets stuck somewhere after finishing certain tasks. Below is my code:
# SVC Parameter Tuning
svc_params = {'C': np.power(10, np.arange(-5., 1.)),
              'kernel': ['rbf', 'linear', 'sigmoid', 'poly'],
              'degree': np.arange(3, 21),
              'coef0': np.linspace(0, 1, 100),
              'shrinking': [True, False],
              'class_weight': ['balanced', None]}

svc = RandomizedSearchCV(SVC(),
                         svc_params,
                         cv=5,
                         scoring='roc_auc',
                         n_jobs=128,
                         n_iter=100,
                         verbose=2)
After some results, the process gets stuck:
[CV] kernel=poly, C=0.0001, degree=20, coef0=0.848484848485,
shrinking=True, class_weight=balanced, total= 11.1s
[CV] kernel=poly, C=0.0001, degree=20, coef0=0.848484848485,
shrinking=True, class_weight=balanced, total= 11.0s
[CV] kernel=poly, C=0.0001, degree=20, coef0=0.848484848485,
shrinking=True, class_weight=balanced, total= 11.5s
I really have no clue what is going on. Thanks for your help!

Dirichlet vs Binomial in pymc3

I am having trouble sampling from a Dirichlet/Multinomial distribution with pymc3.
I tried to create a simple test-case to recreate a Beta/Binomial using Dirichlet/Multinomial with n=2, but I can't get it to work.
Below I have some code that works for Binomial but fails for Multinomial.
One of the obvious differences is that the Multinomial model is more constrained:
i.e. to start, rating is set to 10 in the Binomial model, and [10,10] in the Multinomial.
The pymc3 Dirichlet code does say "Only the first k-1 elements of x are expected" but only arrays of shape 2 seem to work in my code.
The output shows that num_friends and rating are being sampled in the Binomial case, but not in the Multinomial case. friends_ratings is being sampled in both. Thanks!
Oh, also Dirichlet('d', np.array([1,1])) crashes with "Floating point error 8". It only appears to fail when two integers of value 1 are passed in. np.array([1.,1.]) works.
import pymc as pm
import numpy as np

print "TEST BINOMIAL"
with pm.Model() as model:
    friends_ratings = pm.Beta('friends_ratings', alpha=1, beta=2)
    num_friends = pm.DiscreteUniform('num_friends', lower=0, upper=100)
    rating = pm.Binomial('rating', n=num_friends, p=friends_ratings)
    step = pm.Metropolis([num_friends, friends_ratings, rating])
    start = {"friends_ratings": .5, "num_friends": 20, 'rating': 10}
    tr = pm.sample(5, step, start=start, progressbar=False)
print "friends", [tr[i]['num_friends'] for i in range(len(tr))]
print "friends_ratings", [tr[i]['friends_ratings'] for i in range(len(tr))]
print "rating", [tr[i]['rating'] for i in range(len(tr))]

print "TEST DIRICHLET"
with pm.Model() as model:
    friends_ratings = pm.Dirichlet('friends_ratings', np.array([1., 1.]), shape=2)
    num_friends = pm.DiscreteUniform('num_friends', lower=0, upper=100)
    rating = pm.Multinomial('rating', n=num_friends, p=friends_ratings, shape=2)
    step = pm.Metropolis([num_friends, friends_ratings, rating])
    start = {'friends_ratings': np.array([0.5, 0.5]), 'num_friends': 20, 'rating': [10, 10]}
    tr = pm.sample(5, step, start=start, progressbar=False)
print "friends", [tr[i]['num_friends'] for i in range(len(tr))]
print "friends_ratings", [tr[i]['friends_ratings'] for i in range(len(tr))]
print "rating", [tr[i]['rating'] for i in range(len(tr))]
Output:
TEST BINOMIAL
friends [22.0, 24.0, 24.0, 23.0, 23.0]
friends_ratings [0.5, 0.5, 0.41, 0.41, 0.41]
rating [10.0, 11.0, 11.0, 11.0, 11.0]
TEST DIRICHLET
friends [20.0, 20.0, 20.0, 20.0, 20.0]
friends_ratings [array([ 0.51369621, 1.490608 ]), ... ]
rating [array([ 10., 10.]), array([ 10., 10.]), ... ]
PyMC3 does not automatically normalize the Dirichlet. So far you have to do this explicitly using simplextransform. See here for an example.
There is an open issue about making this transform automatic: https://github.com/pymc-devs/pymc3/issues/315
EDIT (9/14/2015): PyMC3 now automatically transforms the Dirichlet distribution (like any other distribution), so you don't need to specify the transform manually anymore.
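For the older PyMC3 versions the answer refers to (before the automatic transform), one possible workaround along these lines was to renormalize the Dirichlet variable yourself before passing it to the Multinomial. This is only a sketch, not the simplextransform route, and it assumes the random variable behaves as a Theano tensor, as it did in early PyMC3:

# sketch for pre-transform PyMC3 only; current versions transform the Dirichlet
# automatically, so this manual normalisation is no longer needed
with pm.Model() as model:
    friends_ratings = pm.Dirichlet('friends_ratings', np.array([1., 1.]), shape=2)
    num_friends = pm.DiscreteUniform('num_friends', lower=0, upper=100)
    p = friends_ratings / friends_ratings.sum()   # force the probabilities to sum to 1
    rating = pm.Multinomial('rating', n=num_friends, p=p, shape=2)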
