How to import angle values from a catalog into GalSim - yaml

I'm trying to import galaxy values (Sersic index, half-light radius, etc.) from an external ASCII file into GalSim. I'm having trouble reading in the position angle value 'beta', and would like to know whether this is possible using the YAML format.
When I try I get the error message:
galsim.errors.GalSimConfigValueError: Invalid value_type specified for parameter beta with type=Catalog. Value <class 'coord.angle.Angle'> not in (<class 'float'>, <class 'int'>, <class 'bool'>, <class 'str'>)
I realise that I'm getting this error message because I'm unable to append the string 'deg' after the input value to specify that its units are degrees.
I've tried adding 'deg' directly to the input catalogue (inside "" quotation marks), with no success. I've also tried adding 'deg' after the catalogue read statement directly in the config, also without success.
A minimal working example is below. It relies on a file named 'input.dat' in the same directory containing a single number (45, for example). Then save the code below as 'test.yaml' and run it on the command line as $ galsim test.yaml:
gal :
    type : Sersic
    n : 1
    half_light_radius : 1
    flux : 1
    ellip :
        type : QBeta
        q : 0.5
        beta : { type : Catalog , col : 0 }

input :
    catalog :
        file_name : input.dat
I expect to be able to read in beta position angle values from an input ASCII catalogue and have them replicated in the output galaxy profiles. The above MWE should produce a small postage-stamp image of a moderately elliptical galaxy at a position angle of 45 degrees (or whatever number is placed inside 'input.dat').
Thank you in advance for any help or advice on this front.

Try this:
gal :
    type : Sersic
    n : 1
    half_light_radius : 1
    flux : 1
    ellip :
        type : QBeta
        q : 0.5
        beta :
            type : Radians
            theta : { type : Catalog , col : 0 }

input :
    catalog :
        file_name : input.dat
There is also a Degrees type that works the same way if your catalog columns list the angle in degrees.
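For a catalogue column that already lists the angle in degrees, the equivalent beta block would presumably be the same structure with the type swapped:

        beta :
            type : Degrees
            theta : { type : Catalog , col : 0 }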

Related

Display images on Tensorboard to have the input, the ground truth and the prediction side by side

I'm working on a deep learning model and I would like to be able to display images on Tensorboard to have the input, the ground truth and the prediction side by side.
Currently, the display looks like this:
[image: current display]
But this visualization is not convenient: it's hard to compare the ground truth with the prediction when the images are not side by side, and we have to scroll to go from the ground truth to the prediction (the images are big and we display more than 6 of them).
The current code:
for epoch in range(EPOCHS):
    for step, (x_train, y_train) in enumerate(train_ds):
        y_, gloss, dloss = pix2pix.train_step(x_train, y_train, epoch)
        if step % PRINT_STEP == 0:
            template = 'Epoch {} {}%, G-Loss: {}, D-Loss: {}'
            print(template.format(epoch + 1, int(100 * step / max_steps), gloss, dloss))
            with train_writer.as_default():
                tf.summary.image('GT', y_train + 0.5, step=epoch * max_steps + step, max_outputs=3, description=None)
                tf.summary.image('pred', y_ + 0.5, step=epoch * max_steps + step, max_outputs=3, description=None)
                tf.summary.image('input', x_train + 0.5, step=epoch * max_steps + step, max_outputs=3, description=None)
                tf.summary.scalar('generator loss', gloss, step=epoch * max_steps + step)
                tf.summary.scalar('discriminator loss', dloss, step=epoch * max_steps + step)
                tf.summary.flush()
So here is an example of what I would like to have:
[image: desired display]
I thought about another solution: save all image triples (input/truth/pred) in local folders (folder 1: input 1/truth 1/pred 1, folder 2: input 2/truth 2/pred 2, ...) and display them with a Python library (cv2, matplotlib, ...), but I have the same problem: I don't know how to do that, or whether it's possible.
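For reference, one way to get the three panels into a single TensorBoard entry is to concatenate the tensors along the width axis before logging them. This is only a sketch, not from the original post; it assumes x_train, y_train and y_ are NHWC batches with the same height and channel count, scaled to [-0.5, 0.5] as in the code above:

# input | ground truth | prediction side by side (axis=2 is the width axis for NHWC)
side_by_side = tf.concat([x_train, y_train, y_], axis=2) + 0.5
with train_writer.as_default():
    tf.summary.image('input_gt_pred', side_by_side,
                     step=epoch * max_steps + step, max_outputs=3)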
Thanks for your help

Robust Standard Errors in lm() using stargazer()

I have read a lot about the pain of replicating Stata's easy robust option in R to get robust standard errors. I replicated the following approaches: StackExchange and Economic Theory Blog. They work, but the problem I face is printing my results with the stargazer function (which produces the .tex code for LaTeX files).
Here is the illustration to my problem:
reg1 <-lm(rev~id + source + listed + country , data=data2_rev)
stargazer(reg1)
This prints the R output as .tex code (with non-robust SEs). If I want to use robust SEs, I can do it with the sandwich package as follows:
vcov <- vcovHC(reg1, "HC1")
If I now use stargazer(vcov), only the output of the vcovHC function is printed and not the regression output itself.
With the lmtest package it is possible to print at least the estimates, but not the number of observations, R2, adjusted R2, residual standard error and the F-statistic.
lmtest::coeftest(reg1, vcov. = sandwich::vcovHC(reg1, type = 'HC1'))
This gives the following output:
t test of coefficients:

             Estimate Std. Error t value Pr(>|t|)
(Intercept) -2.54923    6.85521 -0.3719 0.710611
id           0.39634    0.12376  3.2026 0.001722 **
source       1.48164    4.20183  0.3526 0.724960
country     -4.00398    4.00256 -1.0004 0.319041
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
How can I add or get an output with the following parameters as well?
Residual standard error: 17.43 on 127 degrees of freedom
Multiple R-squared: 0.09676, Adjusted R-squared: 0.07543
F-statistic: 4.535 on 3 and 127 DF, p-value: 0.00469
Did anybody face the same problem and can help me out?
How can I use robust standard errors in the lm function and apply the stargazer function?
You already calculated robust standard errors, and there's an easy way to include them in the stargazer output:
library("sandwich")
library("plm")
library("stargazer")
data("Produc", package = "plm")
# Regression
model <- plm(log(gsp) ~ log(pcap) + log(pc) + log(emp) + unemp,
data = Produc,
index = c("state","year"),
method="pooling")
# Adjust standard errors
cov1 <- vcovHC(model, type = "HC1")
robust_se <- sqrt(diag(cov1))
# Stargazer output (with and without RSE)
stargazer(model, model, type = "text",
se = list(NULL, robust_se))
Solution found here: https://www.jakeruss.com/cheatsheets/stargazer/#robust-standard-errors-replicating-statas-robust-option
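Applied to the lm fit from the question, the same pattern would look like this (a sketch reusing reg1 and the HC1 estimator from above, not part of the linked cheatsheet):

# robust SEs for reg1, passed to stargazer alongside the default SEs
robust_se_lm <- sqrt(diag(vcovHC(reg1, type = "HC1")))
stargazer(reg1, reg1, type = "text",
          se = list(NULL, robust_se_lm))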
Update: I'm not so much into F-tests. People are discussing those issues, e.g. https://stats.stackexchange.com/questions/93787/f-test-formula-under-robust-standard-error
When you follow http://www3.grips.ac.jp/~yamanota/Lecture_Note_9_Heteroskedasticity
"A heteroskedasticity-robust t statistic can be obtained by dividing an OLS estimator by its robust standard error (for zero null hypotheses). The usual F-statistic, however, is invalid. Instead, we need to use the heteroskedasticity-robust Wald statistic."
should you then use a Wald statistic here?
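If a robust overall test is needed, one option (a sketch, not part of the original answer) is lmtest::waldtest, which accepts a heteroskedasticity-consistent covariance matrix through its vcov argument:

library("lmtest")
library("sandwich")
# Compare reg1 against the intercept-only model using an HC1 covariance matrix,
# i.e. a robust analogue of the usual overall F-test.
waldtest(reg1, . ~ 1, vcov = vcovHC(reg1, type = "HC1"), test = "F")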
This is a fairly simple solution using coeftest:
reg1 <- lm(rev ~ id + source + listed + country, data = data2_rev)
cl_robust <- coeftest(reg1, vcov = vcovCL, type = "HC1", cluster = ~ country)
se_robust <- cl_robust[, 2]
stargazer(reg1, reg1, cl_robust, se = list(NULL, se_robust, NULL))
Note that I only included cl_robust in the output as a verification that the results are identical.

Drools - Find minimum Java 8 Local Date

I am trying to find the minimum date in a list of dates (Java 8 LocalDate) using the accumulate function in Drools.
This is my rule:
rule "Print minimum Service Date from Bill Lines"
when
accumulate (
$lineItem : LineLevelData($dateOfService : dateOfService) ,
$epochDay : min($dateOfService.toEpochDay())
)
$minServiceDate : LocalDate() from LocalDate.ofEpochDay($epochDay)
then
System.err.println("Min. Service Date used in rules calculation : " + $minServiceDate);
end
This is the exception I get:
Unable to Analyse Expression LocalDate.ofEpochDay($epochDay):
[Error: unable to resolve method using strict-mode: java.time.LocalDate.ofEpochDay(java.lang.Comparable)]
[Near : {... LocalDate.ofEpochDay($epochDay) ....}]
^ : [Rule name='Print minimum Service Date from Bill Lines']
Obviously, I am missing some basics here. Can somebody help me to fix this one?
Drools version: 7.5.0
POJO:
public class LineLevelData {
    private LocalDate dateOfService;
    public LocalDate getDateOfService() { return dateOfService; }
}
Update:
rule "Print minimum Service Date from Bill Lines"
when
accumulate ( $lineItem : LineLevelData ( $dateOfService : dateOfService ) ,
$epochDay : min($dateOfService.toEpochDay()) )
$epochLong : Number (longValue > 0 ) from $epochDay
$minServiceDate : LocalDate( ) from LocalDate.ofEpochDay($epochLong)
then
System.err.println("Min. Service Date used in rules calculation : " + $minServiceDate);
end
After binding the accumulate result as a Number and using its long value, the epoch day is converted to a LocalDate correctly. Adding this in case it helps someone looking for the same thing.
I guess the problem is that Drools is not preserving the type that the min function returns, and it treats the result as a Comparable instead of a long.
There are three ways you can solve this problem:
1. Create your own accumulate function that deals with LocalDate.
2. Use an accumulate with inline code (a sketch is shown after the workaround below).
3. Use the workaround below to force Drools to cast the Comparable back to LocalDate.
Workaround:
rule "Print minimum Service Date from Bill Lines"
when
$c: Comparable() from accumulate (
LineLevelData($dateOfService : dateOfService) ,
min($dateOfService)
)
$minServiceDate: LocalDate() from $c
then
System.err.println("Min. Service Date used in rules calculation : " +
$minServiceDate);
end
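For completeness, option 2 (an accumulate with inline init/action/result code) might look roughly like this; this is an untested sketch, not from the original answer, and assumes java.time.LocalDate is imported in the DRL:

rule "Print minimum Service Date from Bill Lines (inline accumulate)"
when
    // keep the smallest dateOfService seen across all LineLevelData facts
    $minServiceDate : LocalDate() from accumulate(
        LineLevelData( $dateOfService : dateOfService ),
        init( LocalDate min = null; ),
        action( if (min == null || $dateOfService.isBefore(min)) { min = $dateOfService; } ),
        result( min )
    )
then
    System.err.println("Min. Service Date used in rules calculation : " + $minServiceDate);
end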
Hope it helps,

How to extract the values for a string?

I have this type of data :
--Line1 : val1=10; val2=20; val3=30
--Line2 : val1=11; val2=21; val3=31
--Line3 : val1=12; val2=22; val3=32
--Line4 : val1=13; val2=23; val3=33
--Line5 : val1=14; val2=24; val3=34
--Line6 : val1=15; val2=25; val3=35
--Line7 : val1=16; val2=26; val3=30
Now I am trying to write a script to get any particular value (say val1 for Line4) on the basis of the string "Line1", "Line2", etc.
Any hint? I am working in Linux.
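One possible hint (not from the original thread): with a standard tool such as awk you can match the line label and then pull out the requested key. A sketch, assuming the data sits in a file called data.txt (hypothetical name):

# Print the value of val1 on the Line4 record
awk -F'[;: ]+' '/^--Line4 /{ for (i = 1; i <= NF; i++) if ($i ~ /^val1=/) { sub(/^val1=/, "", $i); print $i } }' data.txt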

volemont/insights:chart.EquityCurve.R: a bug in graphing peaks of cumulative return?

I came across a function for graphing the cumulative return of a strategy and the peaks of that return in a great example of combining shiny and quantstrat, thanks to Simon Otziger. The source code is here. The code works fine most of the time, but for some data it won't graph the peaks properly.
The code below is simplified, but the key logic is unchanged. I ran it with three sets of data (cumPNL1, cumPNL2, cumPNL3) copied from three example strategies; the first set causes the code to fail to graph the peaks properly.
I ran the following code with cumPNL1, cumPNL2, and cumPNL3 separately. With both cumPNL2 and cumPNL3 the code produces the cumulative return line and the peak points successfully. However, with cumPNL1 the code only produces the line; the peaks are not at the right positions.
I noticed that the peakIndex vectors based on cumPNL2 and cumPNL3 both have TRUE as their first value, so when I add the line peakIndex[1] <- TRUE, cumPNL1 works fine with the modified code.
Though it now works with the modified code, I have no idea why it behaves like this. Could anyone have a look? Thanks.
cumPNL1 <- c(-193,-345,-406,-472,-562,-543,-450,-460,-544,-659,-581,-342,-384,276,-858,-257.99)
cumPNL2 <- c(35.64,4.95,-2.97,-6.93,11.88,-19.8,-26.73,-39.6,-49.5,-50.49,-51.48,-48.51,-50.49,-55.44,143.55,770.22,745.47,691.02,847.44,1141.47,1007.82,1392.93,1855.26,1863.18,2536.38,2778.93,2811.6,2859.12,2417.58)
cumPNL3 <- c(35.64,4.95,-2.97,-6.93,11.88,-19.8,-26.73,-39.6,-49.5,-50.49,-51.48,-48.51,-50.49,-55.44,143.55,770.22,745.47,691.02,847.44,1141.47,1007.82,1392.93,1855.26,1863.18,2536.38,2778.93,2811.6,2859.12,2417.58)
peakIndex <- c(cumPNL3[1] > 0, diff(cummax(cumPNL3)) > 0)
# peakIndex[1] <- TRUE
dev.new()
plot(cumPNL3, type='n', xlab="index of trades", ylab="returns in cash", main="cumulative returns and peaks")
grid()
lines(cumPNL3)
points(cbind(1 : length(cumPNL3), cumPNL3)[peakIndex, ],
       pch = 19, col = 'green', cex = 0.6)
legend(
    x = 'bottomright', inset = 0.1,
    legend = c('Net Profit', 'Peaks'),
    lty = c(1, NA), pch = c(NA, 19),
    col = c('black', 'green')
)
cumPNL1 has a single peak, so R drops the dimension from a numeric matrix to a numeric vector of length 2. The points function then plots the two vector values on the y-axis against the x-axis indices 1 and 2:
peakIndex1 <- c(cumPNL1[1] > 0, diff(cummax(cumPNL1)) > 0)
peakIndex3 <- c(cumPNL3[1] > 0, diff(cummax(cumPNL3)) > 0)
str(cbind(1 : length(cumPNL1), cumPNL1)[peakIndex1,])
str(cbind(1 : length(cumPNL3), cumPNL3)[peakIndex3,])
Output:
> str(cbind(1 : length(cumPNL1), cumPNL1)[peakIndex1,])
 Named num [1:2] 14 276
 - attr(*, "names")= chr [1:2] "" "cumPNL1"
> str(cbind(1 : length(cumPNL3), cumPNL3)[peakIndex3,])
 num [1:12, 1:2] 1 15 16 19 20 22 23 24 25 26 ...
 - attr(*, "dimnames")=List of 2
  ..$ : NULL
  ..$ : chr [1:2] "" "cumPNL3"
Usually setting drop = FALSE preserves the object, e.g., str(cbind(1 : length(cumPNL1), cumPNL1)[peakIndex1, drop = FALSE]), but that form does not help here: without the second (empty) column index, R performs vector-style subsetting, so the call would have to be [peakIndex1, , drop = FALSE]. However, changing the points line to the following fixes the problem:
points(seq_along(cumPNL3)[peakIndex], cumPNL3[peakIndex], pch = 19,
col = 'green', cex = 0.6)
Thanks for reporting the issue. I will push the fix to GitHub tomorrow.
