volemont/insights:chart.EquityCurve.R: a bug in graphing peaks of cumulative return? - algorithmic-trading

I came across a function for graphing the cumulative return of a strategy and the peaks of that return in a great example of combining shiny and quantstrat, thanks to Simon Otziger. The source code is here. The code works fine most of the time, but for some data it won't graph the peaks properly.
The code below is simplified, but the key logic is unchanged. I ran it with three sets of data (cumPNL1, cumPNL2, cumPNL3) copied from three example strategies; the first set causes the code to fail to graph the peaks properly.
I ran the following code with cumPNL1, cumPNL2 and cumPNL3 separately. With both cumPNL2 and cumPNL3 the code produces the cumulative return line and the peak points successfully. With cumPNL1, however, it only produces the line; the peaks are not in the right positions.
I noticed that peakIndex based on cumPNL2 and cumPNL3 has TRUE as its first value, so when I modify the code by adding the line peakIndex[1] <- TRUE, cumPNL1 works fine as well.
Although the modified code works, I have no idea why it behaves like this. Could anyone have a look? Thanks.
cumPNL1 <- c(-193,-345,-406,-472,-562,-543,-450,-460,-544,-659,-581,-342,-384,276,-858,-257.99)
cumPNL2 <- c(35.64,4.95,-2.97,-6.93,11.88,-19.8,-26.73,-39.6,-49.5,-50.49,-51.48,-48.51,-50.49,-55.44,143.55,770.22,745.47,691.02,847.44,1141.47,1007.82,1392.93,1855.26,1863.18,2536.38,2778.93,2811.6,2859.12,2417.58)
cumPNL3 <- c(35.64,4.95,-2.97,-6.93,11.88,-19.8,-26.73,-39.6,-49.5,-50.49,-51.48,-48.51,-50.49,-55.44,143.55,770.22,745.47,691.02,847.44,1141.47,1007.82,1392.93,1855.26,1863.18,2536.38,2778.93,2811.6,2859.12,2417.58)
peakIndex <- c(cumPNL3[1] > 0, diff(cummax(cumPNL3)) > 0)
# peakIndex[1] <- TRUE
dev.new()
plot(cumPNL3, type='n', xlab="index of trades", ylab="returns in cash", main="cumulative returns and peaks")
grid()
lines(cumPNL3)
points(cbind(1 : length(cumPNL3), cumPNL3)[peakIndex, ],
       pch=19, col='green', cex=0.6)
legend(
  x='bottomright', inset=0.1,
  legend=c('Net Profit','Peaks'),
  lty=c(1, NA), pch=c(NA, 19),
  col=c('black','green')
)
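For reference, a quick check on the three vectors (using a small helper, peakIdx, defined just for this check) confirms the observation above: cumPNL1 yields a single peak flag, while cumPNL2 and cumPNL3 both start with TRUE:
peakIdx <- function(x) c(x[1] > 0, diff(cummax(x)) > 0)
sapply(list(cumPNL1, cumPNL2, cumPNL3), function(x) sum(peakIdx(x)))    # 1 12 12
sapply(list(cumPNL1, cumPNL2, cumPNL3), function(x) peakIdx(x)[1])      # FALSE TRUE TRUE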

cumPNL1 has only a single peak, so R drops the dimensions and reduces the subset from a numeric matrix to a numeric vector of length 2. The points function then plots these two values on the y-axis against the x-axis indices 1 and 2:
peakIndex1 <- c(cumPNL1[1] > 0, diff(cummax(cumPNL1)) > 0)
peakIndex3 <- c(cumPNL3[1] > 0, diff(cummax(cumPNL3)) > 0)
str(cbind(1 : length(cumPNL1), cumPNL1)[peakIndex1,])
str(cbind(1 : length(cumPNL3), cumPNL3)[peakIndex3,])
Output:
> str(cbind(1 : length(cumPNL1), cumPNL1)[peakIndex1,])
 Named num [1:2] 14 276
 - attr(*, "names")= chr [1:2] "" "cumPNL1"
> str(cbind(1 : length(cumPNL3), cumPNL3)[peakIndex3,])
 num [1:12, 1:2] 1 15 16 19 20 22 23 24 25 26 ...
 - attr(*, "dimnames")=List of 2
  ..$ : NULL
  ..$ : chr [1:2] "" "cumPNL3"
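In other words, when peakIndex1 selects a single row, points() receives a plain length-2 vector and reads it as two y-values (a tiny illustration of what happens):
# the subset collapses to c(14, 276), so this call
points(c(14, 276), pch = 19, col = 'green', cex = 0.6)
# is plotted at x = 1:2, y = c(14, 276), not as one point at (14, 276)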
Usually setting drop = FALSE preserves the matrix, but note that the row subset needs an extra comma, e.g. str(cbind(1 : length(cumPNL1), cumPNL1)[peakIndex1, , drop = FALSE]); written as [peakIndex1, drop = FALSE] it is treated as single-index subsetting and the dimensions are dropped anyway, which is why it seems not to work here. However, changing the points line to the following fixes the problem:
points(seq_along(cumPNL3)[peakIndex], cumPNL3[peakIndex], pch = 19,
col = 'green', cex = 0.6)
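Keeping the matrix form explicitly also works; a minimal sketch of the drop = FALSE variant (peakXY is just an illustrative name; note the extra comma):
# the 1 x 2 matrix is preserved, so points() treats the columns as x and y
peakXY <- cbind(seq_along(cumPNL1), cumPNL1)[peakIndex1, , drop = FALSE]
points(peakXY, pch = 19, col = 'green', cex = 0.6)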
Thanks for reporting the issue. I will push the fix to GitHub tomorrow.

Related

How to solve the error '[not a vector]'

I ran this code to find the norms of some fundamental units of a biquadratic number field, but I faced the following problem.
for (q=5, 200, for(p=q+1, 200, if (isprime(p)==1 && isprime(q)==1 ,k1=bnfinit(y^2-2*p,1); k2=bnfinit(y^2-q,1); k3=bnfinit(y^2-2*p*q,1); ep1=k1[8][5][1]; ep2=k2[8][5][1]; ep3=k3[8][5][1]; normep1=nfeltnorm(k1,ep1); normep2=nfeltnorm(k2,ep2); normep3=nfeltnorm(k3,ep3); li=[[q,p], [normep1, normep2, normep3]]; lis4=concat(lis4,[li]))))
and it works for small p and q. However, when I run it for p and q greater than 150, it gives the '[not a vector]' error from the title.
At first I didn't use the flag = 1 for bnfinit, but after adding it I still get the same error.
Please do not use indexing like ...[8][5][1] to get the fundamental units (FU); it seems that bnfinit omits the FU matrix for some p and q. Instead, use the member function fu to retrieve the FU. Please find an example below:
> [q, p] = [23, 109];
> k = bnfinit(y^2 - 2*p*q, 1);
> k[8][5]
[;]
> k[8][5][1] \\ you will get the error here trying to index the empty matrix.
...
incorrect type in _[_] OCcompo1 [not a vector] (t_MAT).
> k.fu
[Mod(-355285121749346859670064114879166870*y - 25157598731408198132266996072608016699, y^2 - 5014)]
> norm(k.fu[1])
1

Robust Standard Errors in lm() using stargazer()

I have read a lot about the pain of replicating Stata's easy robust option in R. I replicated the following approaches: StackExchange and Economic Theory Blog. They work, but the problem I face is printing my results with the stargazer function (which produces the .tex code for LaTeX files).
Here is an illustration of my problem:
reg1 <-lm(rev~id + source + listed + country , data=data2_rev)
stargazer(reg1)
This prints the R output as .tex code (non-robust SE). If I want to use robust SEs, I can compute them with the sandwich package as follows:
vcov <- vcovHC(reg1, "HC1")
If I now use stargazer(vcov), only the output of the vcovHC function is printed and not the regression output itself.
With the lmtest package it is possible to print at least the estimates, but not the number of observations, R2, adjusted R2, residual standard error and the F-statistic.
lmtest::coeftest(reg1, vcov. = sandwich::vcovHC(reg1, type = 'HC1'))
This gives the following output:
t test of coefficients:

              Estimate Std. Error t value Pr(>|t|)
(Intercept)  -2.54923    6.85521 -0.3719 0.710611
id            0.39634    0.12376  3.2026 0.001722 **
source        1.48164    4.20183  0.3526 0.724960
country      -4.00398    4.00256 -1.0004 0.319041
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
How can I also get an output that includes the following statistics?
Residual standard error: 17.43 on 127 degrees of freedom
Multiple R-squared: 0.09676, Adjusted R-squared: 0.07543
F-statistic: 4.535 on 3 and 127 DF, p-value: 0.00469
Did anybody face the same problem and can help me out?
How can I use robust standard errors in the lm function and apply the stargazer function?
You already calculated the robust standard errors, and there's an easy way to include them in the stargazer output:
library("sandwich")
library("plm")
library("stargazer")
data("Produc", package = "plm")
# Regression
model <- plm(log(gsp) ~ log(pcap) + log(pc) + log(emp) + unemp,
             data = Produc,
             index = c("state", "year"),
             model = "pooling")
# Adjust standard errors
cov1 <- vcovHC(model, type = "HC1")
robust_se <- sqrt(diag(cov1))
# Stargazer output (with and without RSE)
stargazer(model, model, type = "text",
se = list(NULL, robust_se))
Solution found here: https://www.jakeruss.com/cheatsheets/stargazer/#robust-standard-errors-replicating-statas-robust-option
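The same pattern also works with the plain lm() fit from the question (a minimal sketch; reg1 and data2_rev are the question's objects, and cov_reg1 / robust_se_lm are just illustrative names):
library(sandwich)
library(stargazer)
# HC1 covariance matrix and robust SEs for the lm() fit
cov_reg1     <- vcovHC(reg1, type = "HC1")
robust_se_lm <- sqrt(diag(cov_reg1))
# second column reports the same coefficients with robust standard errors
stargazer(reg1, reg1, type = "text", se = list(NULL, robust_se_lm))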
Update: I'm not that familiar with F-tests. People are discussing these issues, e.g. https://stats.stackexchange.com/questions/93787/f-test-formula-under-robust-standard-error
Following http://www3.grips.ac.jp/~yamanota/Lecture_Note_9_Heteroskedasticity:
"A heteroskedasticity-robust t statistic can be obtained by dividing an OLS estimator by its robust standard error (for zero null hypotheses). The usual F-statistic, however, is invalid. Instead, we need to use the heteroskedasticity-robust Wald statistic."
So should we use a Wald statistic here?
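For what it's worth, a heteroskedasticity-robust Wald test of the overall model can be computed with lmtest::waldtest by supplying a robust covariance estimator (a sketch reusing reg1 and data2_rev from the question; reg0 is a hypothetical intercept-only model introduced just for illustration):
library(lmtest)
library(sandwich)
# restricted (intercept-only) model
reg0 <- lm(rev ~ 1, data = data2_rev)
# robust analogue of the overall F-test
waldtest(reg0, reg1, vcov = function(x) vcovHC(x, type = "HC1"))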
This is a fairly simple solution using coeftest:
library(lmtest)
library(sandwich)
reg1 <- lm(rev ~ id + source + listed + country, data = data2_rev)
cl_robust <- coeftest(reg1, vcov = vcovCL, type = "HC1", cluster = ~country)
se_robust <- cl_robust[, 2]
stargazer(reg1, reg1, cl_robust, se = list(NULL, se_robust, NULL))
Note that I only included cl_robust in the output as a verification that the results are identical.

Pinescript duplicate alerts

I have created a very basic script in pinescript.
study(title='Renko Strat w/ Alerts', shorttitle='S_EURUSD_5_[MakisMooz]', overlay=true)
rc = close
buy_entry = rc[0] > rc[2]
sell_entry = rc[0] < rc[2]
alertcondition(buy_entry, title='BUY')
alertcondition(sell_entry, title='SELL')
plot(buy_entry/10)
The problem is that I get a lot of duplicate alerts. I want to edit this script so that I only get a 'Buy' alert when the previous alert was a 'Sell' alert and vice versa. It seems like such a simple problem, but I have a hard time finding good sources to learn Pine Script. So any help would be appreciated. :)
One way to avoid duplicate alerts within the same candle is to use a "Once Per Bar Close" alert. But for alternating alerts (Buy/Sell) you have to code it with different logic.
I suggest using version 3 (the //@version=3 directive above the study() line) rather than versions 1 and 2; you can accomplish the result with this logic:
buy_entry = 0.0
sell_entry = 0.0
buy_entry := rc[0] > rc[2] and sell_entry[1] == 0? 2.0 : sell_entry[1] > 0 ? 0.0 : buy_entry[1]
sell_entry := rc[0] < rc[2] and buy_entry[1] == 0 ? 2.0 : buy_entry[1] > 0 ? 0.0 : sell_entry[1]
alertcondition(crossover(buy_entry ,1) , title='BUY' )
alertcondition(crossover(sell_entry ,1), title='SELL')
You'll have to do it this way
if (buy_entry)   // your buy condition here
    strategy.entry("Buy Alert", true, 1)
if (sell_entry)  // your sell condition here
    strategy.entry("Sell Alert", false, 1)
This is a very basic form of it, but it works.
You were getting duplicate alerts because the conditions were being fulfilled more often, but with strategy.entry() this won't happen.
When the sell is triggered, as per paper trading, the quantity sold will be double (one to close the long position and one to open a short position).
PS: You will have to add code to create the alerts, and put this in strategy() rather than study().
The simplest solution to this problem is to use the built-in crossover and crossunder functions.
They consider the entire series of (in this case) close values, returning true only at the moment they cross rather than every single time a close is lower than the close two candles ago.
//@version=5
indicator(title='Renko Strat w/ Alerts', shorttitle='S_EURUSD_5_[MakisMooz]', overlay=true)
c = close
bool buy_entry = false
bool sell_entry = false
if ta.crossover(c[1], c[3])
    buy_entry := true
    alert('BUY')
if ta.crossunder(c[1], c[3])
    sell_entry := true
    alert('SELL')
plotchar(buy_entry, title='BUY', char='B', location=location.belowbar, color=color.green, offset=-1)
plotchar(sell_entry, title='SELL', char='S', location=location.abovebar, color=color.red, offset=-1)
It's important to note why I changed the indices to 1 and 3, with an offset of -1 in the plotchar function: this gives exactly the same signals as 0 and 2 with no offset.
The difference is that you will only see the character printed on the chart when the candle actually closes, rather than watching it flicker on and off as the close price of the incomplete candle moves.

setTargetReturn in fPortfolio package

I have a three-asset portfolio and I need to set the target return to the mean return of my second asset.
Whenever I try, I get this error:
asset.ts <- as.timeSeries(asset.ret)
spec <- portfolioSpec()
setSolver(spec) <- "solveRshortExact"
constraints <- c("Short")
setTargetReturn(Spec) = mean(colMeans(asset.ts[,2]))
efficientPortfolio(asset.ts, spec, constraints)
Error: is.numeric(targetReturn) is not TRUE
Title:
 MV Efficient Portfolio
 Estimator:       covEstimator
 Solver:          solveRquadprog
 Optimize:        minRisk
 Constraints:     Short

Portfolio Weights:
 MSFT AAPL NORD
    0    0    0

Covariance Risk Budgets:
 MSFT AAPL NORD

Target Return and Risks:
 mean   mu  Cov Sigma CVaR  VaR
    0    0    0     0    0    0

Description:
 Sat Apr 19 15:03:24 2014 by user: Usuario
I have tried and I have searched the web, but I have no idea how to set the target return to a specific expected return from the data set. I could copy in the mean of my second asset by hand, but I think the rounded decimals could affect the answer.
I ran into this error when using 2 assets.
It appears to be a bug in the PortOpt methods: when there are 2 assets, it runs .mvSolveTwoAssets, which looks for the targetReturn in the portfolio spec, even though a targetReturn isn't always needed.
Note, however, that your code has two separate variables for the spec: 'spec' and 'Spec'. Assuming 'Spec' is a typo, this line needs to be changed:
setTargetReturn(Spec) = mean(colMeans(asset.ts[,2]))
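A minimal corrected sketch, assuming 'Spec' was indeed meant to be 'spec':
# use the existing 'spec' object with the replacement form of setTargetReturn
setTargetReturn(spec) <- mean(colMeans(asset.ts[, 2]))
efficientPortfolio(asset.ts, spec, constraints)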

How to calculate correlation with a sliding window?

I have a zoo object called aux with yearly data from 1961 to 2009:
      x$nao  x[, 2]
1961  -0.03    63.3
1962   0.20   155.9
1963  -2.98   211.0
I want to calculate the correlation between the two columns using a 20-year sliding window. I am trying to use rollapply, but I can't seem to make it work. I have tried several different ways, but always without success...
> rollapply(aux,20, cor(aux[,1],aux[,2],method="pearson"))
Error in match.fun(FUN) : 'cor(aux[, 1], aux[, 2], method = "pearson")' is not a function, character or symbol
> rollapply(aux,20, cor,method="pearson")
Error in FUN(coredata(data)[posns], ...) : supply both 'x' and 'y' or a matrix-like 'x'
> rollapply(aux,20, cor)
Error in FUN(coredata(data)[posns], ...) : supply both 'x' and 'y' or a matrix-like 'x'
Can anybody tell me how to make rollapply work?
Thanks for helping!
Try this.
library(quantmod)
library(TTR)
#Set the seed so results can be duplicated
set.seed(123)
#Build a zoo object with typical price data
var1 <- zoo(cumprod(1 + rnorm(41, 0.01, 0.05)), seq(1961, 2001, 1))
var2 <- zoo(cumprod(1 + rnorm(41, 0.015, 0.1)), seq(1961, 2001, 1))
dat <- merge(var1=var1, var2=var2)
plot(dat)
grid()
#Calculate the percent returns for the two prices
del1 <- Delt(dat$var1)
del2 <- Delt(dat$var2)
dat <- merge(dat, del1=del1, del2=del2)
dimnames(dat)[[2]][3] <- "del1"
dimnames(dat)[[2]][4] <- "del2"
head(dat)
plot(dat)
#Calculate the correlation between the two returns using a 5 year sliding window
delcor <- runCor(dat$del1, dat$del2, n=5, use="all.obs", sample=TRUE, cumulative=FALSE)
dat <- merge(dat, delcor)
plot(dat$delcor, type="l", main="Sliding Window Correlation of Two Return Series", xlab="", col="red")
grid()
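If you prefer to make rollapply itself work, here is a minimal sketch (assuming aux is the two-column zoo object from the question): by.column = FALSE hands the whole 20-row window to the function, so cor() sees both columns at once.
library(zoo)
# align = "right" dates each correlation at the end of its 20-year window
rollcor <- rollapply(aux, width = 20,
                     FUN = function(z) cor(z[, 1], z[, 2], method = "pearson"),
                     by.column = FALSE, align = "right")
plot(rollcor, main = "20-year rolling correlation", xlab = "", ylab = "correlation")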
