Editing table - stargazer

I want to reproduce a table similar to the one I am attaching. I found some data frames online and am using them to practice.
I have 2 problems:
I can't remove the "Dependent variable" title and put "Donations($)" in its place. I tried dep.var.labels() and column.labels=c(), but neither works.
I want to put the mean of the dependent variable in the bottom part of the table, but there seems to be no statistic code for the mean of the dependent variable. I tried keep.stat = c("rsq", "n", "all"), but I only get R-squared and observations, not the rest of the statistics (my idea was to keep "all" and then use omit.stat() to drop the ones I do not need).
How do I insert these row labels?
Here is the code I used.
How can I fix this?
library(stargazer)
library(htmltools)
library(htmlTable)

## summary of the data frame
stargazer(attitude, type = "text")

## the models
linear.1 <- lm(rating ~ complaints + privileges + learning + raises + critical,
               data = attitude)
linear.2 <- lm(rating ~ complaints + privileges + learning, data = attitude)

## create an indicator dependent variable, and run a probit model
attitude$high.rating <- (attitude$rating > 70)

## table
star.out <- stargazer(linear.1, linear.2,
                      align = TRUE,
                      covariate.labels = c("Handling of Complaints", "No Special Privileges",
                                           "Opportunity to Learn", "Performance-Based Raises",
                                           "Too Critical", "Advancement"),
                      keep.stat = c("rsq", "n"), no.space = TRUE, type = "html")
Here is the table I want to replicate:
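For what it's worth, here is a minimal sketch of one way to get both effects (untested against the attached table; it relies on stargazer's dep.var.caption argument to blank the "Dependent variable:" banner, dep.var.labels for the replacement label, and add.lines to append custom rows such as the mean of the dependent variable):
stargazer(linear.1, linear.2,
          align = TRUE,
          dep.var.caption = "",               # drop the "Dependent variable:" title
          dep.var.labels = "Donations($)",    # label shown in its place
          covariate.labels = c("Handling of Complaints", "No Special Privileges",
                               "Opportunity to Learn", "Performance-Based Raises",
                               "Too Critical", "Advancement"),
          add.lines = list(c("Mean of Dep. Variable",
                             round(mean(attitude$rating), 2),
                             round(mean(attitude$rating), 2))),
          keep.stat = c("rsq", "n"), no.space = TRUE, type = "html")
Both models share the same dependent variable here, so the same mean is repeated once per column.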

Related

Xpress Mosel Parallel Solving

Is it possible to have two submodels (.mos) running in parallel as input data for a master problem?
Is it correct to repeat the following code twice, once for each submodel? I have one BIM file for each submodel, so two BIM files in my case.
declarations
  A = 1..10
  modPar: array(A) of Model
end-declarations

if compile("rtparams.mos") <> 0 then exit(1)
end-if

forall(i in A) do
  load(modPar(i), "rtparams.bim")
  run(modPar(i), "PARAM1=" + i)
end-do

forall(i in A) do
  wait
  dropnextevent
end-do
Yes, that should work. However, it is probably better not to just drop the event but to capture it (via getnextevent) and then check the exit status/code to make sure the submodel did not stop with an error.
You have detailed examples for this in the user manual here.
Basically, what you do instead of dropnextevent is this:
event:=getnextevent
writeln("Exit status: ", getvalue(event))
writeln("Exit code : ", getexitcode(modPar(i)))
(and then of course handle errors).

AddField function does not work as expected

I have the following block of code that iterates through a set of tables and, for each table, adds its fields to a new tablebox.
'iterate through every table
For i = 1 To arrTCount
    'the arrFF array holds the names of the fields of each table
    arrFF = Split(arrFields(i), ", ")
    arrFFCount = UBound(arrFF)
    'create a tablebox
    Set TB = ActiveDocument.Sheets("Main").CreateTableBox
    'iterate through the fields of the array
    For j = 0 To (arrFFCount - 1)
        'add the field to the tablebox
        TB.AddField arrFF(j)
        'Msgbox(arrFF(j))
    Next
    Set tboxprop = TB.GetProperties
    tboxprop.Layout.Frame.ObjectId = "TB" + CStr(i)
    TB.SetProperties tboxprop
Next
The above code creates the tableboxes, but each one is missing its last field. If I change the loop from For j=0 To (arrFFCount - 1) to For j=0 To (arrFFCount), it creates empty tableboxes and seems to execute forever. Regarding this change, I tested the field names with the Msgbox(arrFF(j)) command, and it shows me the correct field names, exactly as I want them to appear in the tableboxes in the QlikView UI.
Does anybody have an idea of what the problem is here? Can I do this in a different way?
To clarify the situation and what I have tested so far: I have 11 tables to make tableboxes of, and I have tried with just one of them as well as with several. The result I get with the code is on the left, and what I expect to see is on the right of the following image. Please note that the number of fields varies for each table; the image shows just one of them as an example.

Highlighting minimum row value in pander

I am trying to display a data frame in an R Markdown document using the pander package.
I would like to highlight the minimum value in each row of values. Here's what I have tried:
df <- replicate(4, rnorm(5))
df <- as.data.frame(df)
df$min <- apply(df, 1, min)
emphasize.strong.cells(which(df == df$min, arr.ind = T))
pander(df[1:4])
When I do this I get the error:
Error in check.highlight.parameters(emphasize.strong.cells, nrow(t), ncol(t)) :
Too high number passed for column indexes that should be kept below 6
I can print out the whole table (with the min column) without any trouble or I can print out a partial table without emphasis, but neither of these is ideal. I want the highlighting, but I do not wish to include the 'min' column.
I imagine the fact that I am leaving some highlighted cells out of the pander command is causing the error.
Is there a way around this? Or a better way to do this?
Thanks.
Subquestion: what if I wanted to highlight the minimum in the first few rows and the maximum in the next few? Is that possible in a single table?
Instead of the which lookup, which risks matching row minimums in the wrong rows, you can construct those array indices directly with a simple sequence (1:N) and a call to which.min on each row, e.g. with apply:
> df <- replicate(4, rnorm(5))
> df <- as.data.frame(df)
> emphasize.strong.cells(cbind(1:nrow(df), apply(df, 1, which.min)))
> pander(df)
----------------------------------------------
    V1          V2          V3          V4
----------- ----------- ----------- ----------
  0.6802      0.1409    **-0.7992**   0.1997
  0.6797    **-0.2212**    1.016      0.6874
   2.031     -0.009855    0.3881    **-1.275**
   1.376      0.2619     **-2.337**  -0.1066
**-0.4541**    1.135      -0.1566     0.2912
----------------------------------------------
About your next question: you could of course do that in a single table, e.g. by rbind-ing two index matrices created similarly to the above, one with which.min and one with which.max. A rough sketch follows.
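For instance (a sketch only, reusing the df from above and arbitrarily splitting it so the first three rows get minimums and the rest maximums):
top <- cbind(1:3, apply(df[1:3, ], 1, which.min))                   # minimums in the first rows
bottom <- cbind(4:nrow(df), apply(df[4:nrow(df), ], 1, which.max))  # maximums in the remaining rows
emphasize.strong.cells(rbind(top, bottom))
pander(df)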

Creating a SPSS loop function

I need to create a syntax loop that runs a series of transformations.
This is a simplified example of what I need to do.
I would like to create five fruit variables:
apple_variable
banana_variable
mango_variable
papaya_variable
orange_variable
In V1, the fruits are coded as:
apple=1
banana=2
mango=3
papaya=4
orange=5
First loop
IF (V1={number}) {fruit}_variable = VX.
IF (V2={number}) {fruit}_variable = VY.
IF (V3={number}) {fruit}_variable = VZ.
Run loop for next fruit
So what I would like is for the script to check whether V1, V2 or V3 contains the fruit number. If one of them does (only one can), the new {fruit}_variable should get the value from VX, VY or VZ respectively.
Is this possible? The script needs to create over 200 variables, so it is a bit too time-consuming to do manually.
The first loop can be put within a DO REPEAT command. Essentially you define your two lists of variables and you can loop over the set of if statements.
DO REPEAT V# = V1 V2 V3
         /VA = VX VY VZ.
IF V# = 1 apple_variable = VA.
END REPEAT.
Now 1 and apple_variable are hard coded in the example above, but we can roll this up into a simple macro statement to take arbitrary parameters.
DEFINE !fruit (!POSITIONAL = !TOKENS(1)
              /!POSITIONAL = !TOKENS(1)).
DO REPEAT V# = V1 V2 V3
         /VA = VX VY VZ.
IF V# = !1 !2 = VA.
END REPEAT.
!ENDDEFINE.

!fruit 1 apple_variable.
Now this will still be a bit tedious for over 200 variables, but it should greatly simplify the task. After I get this far, I typically just text-edit my list to call the macro 200 times; in this instance, all it takes is inserting !fruit before each number and the resulting variable name. This works especially well if the list is static.
Other approaches using the built-in SPSS facilities (mainly looping within a defined MACRO) are, in my opinion, ugly; they can greatly complicate the code and are frequently not worth the time (although they are certainly doable). That is somewhat mitigated if you are willing to accept a solution utilizing Python commands.
DO REPEAT is a good solution here, but I'm wondering what the ultimate goal is. This smells like a problem that might be solved by using the multiple response facilities in Statistics without the need to go through these transformations. Multiple response functionality is available in the old MULTIPLE RESPONSE procedure and in the newer CTABLES and Chart Builder facilities.
HTH,
Jon Peck
A combination of loop statements (for, while, do-while) with nested if/else and switch cases will also do the trick. Just make sure you have your initial and final values for the loop to run over, for example:
for (initial; final; increment)
{
    if (x == value) {
        statements;
    } else {
        ...
    }
}

add columns to data frame using foreach and %dopar%

In Revolution R 2.12.2 on Windows 7 and Ubuntu 64-bit 11.04, I have a data frame with over 100K rows and over 100 columns, and I derive ~5 columns (sqrt, log, log10, etc.) from each of the original columns and add them to the same data frame. Without parallelism, using foreach and %do%, this works fine, but it's slow. When I try to parallelize it with foreach and %dopar%, the workers do not access the global environment (to prevent race conditions or something like that), so I cannot modify the data frame: the data frame object is 'not found.'
My question is: how can I make this faster? In other words, how can I parallelize either over the columns or over the transformations?
Simplified example:
require(foreach)
require(doSMP)
w <- startWorkers()
registerDoSMP(w)

transform_features <- function()
{
  cols <- c(1, 2, 3, 4)  # in my real code I select certain columns (not all)
  foreach(thiscol = cols, mydata) %dopar% {
    name <- names(mydata)[thiscol]
    print(paste('transforming variable ', name))
    mydata[, paste(name, 'sqrt', sep = '_')] <<- sqrt(mydata[, thiscol])
    mydata[, paste(name, 'log',  sep = '_')] <<- log(mydata[, thiscol])
  }
}

n <- 10  # I often have 100K-1M rows
mydata <- data.frame(
  a = runif(n, 1, 100),
  b = runif(n, 1, 100),
  c = runif(n, 1, 100),
  d = runif(n, 1, 100)
)
ncol(mydata)  # 4 columns
transform_features()
ncol(mydata)  # if it works, there should be 8
Note that if you change %dopar% to %do%, it works fine.
Try the := operator in data.table to add the columns by reference. You'll need with=FALSE so you can put the call to paste on the LHS of :=.
See When should I use the := operator in data.table?
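As an illustration, a minimal sketch using the newer parenthesized form of := (which in current data.table replaces the with=FALSE idiom), assuming mydata as defined above:
library(data.table)
dt <- as.data.table(mydata)
cols <- c("a", "b", "c", "d")
dt[, (paste0(cols, "_sqrt")) := lapply(.SD, sqrt), .SDcols = cols]  # add sqrt columns by reference
dt[, (paste0(cols, "_log")) := lapply(.SD, log), .SDcols = cols]    # add log columns by reference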
Might it be easier if you did something like
n <- 10
mydata <- data.frame(
  a = runif(n, 1, 100),
  b = runif(n, 1, 100),
  c = runif(n, 1, 100),
  d = runif(n, 1, 100)
)
mydata_sqrt <- sqrt(mydata)
colnames(mydata_sqrt) <- paste(colnames(mydata), 'sqrt', sep = '_')
mydata <- cbind(mydata, mydata_sqrt)
producing something like
> mydata
           a         b         c        d   a_sqrt   b_sqrt   c_sqrt   d_sqrt
1  29.344088 47.232144 57.218271 58.11698 5.417018 6.872565 7.564276 7.623449
2   5.037735 12.282458  3.767464 40.50163 2.244490 3.504634 1.940996 6.364089
3  80.452595 76.756839 62.128892 43.84214 8.969537 8.761098 7.882188 6.621340
4  39.250277 11.488680 38.625132 23.52483 6.265004 3.389496 6.214912 4.850240
5  11.459075  8.126104 29.048527 76.17067 3.385126 2.850632 5.389669 8.727581
6  26.729365 50.140679 49.705432 57.69455 5.170045 7.081008 7.050208 7.595693
7  42.533937  7.481240 59.977556 11.80717 6.521805 2.735186 7.744518 3.436157
8  41.673752 89.043099 68.839051 96.15577 6.455521 9.436265 8.296930 9.805905
9  59.122106 74.308573 69.883037 61.85404 7.689090 8.620242 8.359607 7.864734
10 24.191878 94.059012 46.804937 89.07993 4.918524 9.698403 6.841413 9.438217
There are two ways you can handle this:
1. Loop over each column (or, better yet, a subset of the columns), apply the transformations to create a temporary data frame, return that, and then cbind the list of data frames, as @Henry suggested (sketched after this list).
2. Loop over the transformations, apply each to the data frame, then return the transformation data frames, cbind them, and proceed.
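A rough sketch of the first option (assuming a registered parallel backend; doParallel stands in here for the now-defunct doSMP, and only the sqrt and log transformations are shown):
library(foreach)
library(doParallel)

cl <- makeCluster(2)
registerDoParallel(cl)

# each iteration returns a small data frame of derived columns;
# .combine = cbind stitches the pieces together in column order
newcols <- foreach(j = seq_along(mydata), .combine = cbind) %dopar% {
  out <- data.frame(sqrt(mydata[[j]]), log(mydata[[j]]))
  names(out) <- paste(names(mydata)[j], c('sqrt', 'log'), sep = '_')
  out
}
mydata <- cbind(mydata, newcols)
stopCluster(cl)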
Personally, the way I tend to do things like this is to create a big.matrix object (either in memory or on disk, using the bigmemory package), so all of the columns can be accessed in shared memory. Just pre-allocate the columns you will fill in, and you won't need to do a post hoc cbind. I tend to do it on disk. Just be sure to run flush() to make sure everything is written to disk.
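A rough sketch of that approach (untested; it assumes bigmemory's describe()/attach.big.matrix() pattern for sharing the matrix across workers, again with doParallel standing in as the backend):
library(bigmemory)
library(foreach)
library(doParallel)

n <- 1e5
orig <- matrix(runif(n * 4, 1, 100), ncol = 4)

# pre-allocate room for the 4 original columns plus 4 derived ones
bm <- big.matrix(nrow = n, ncol = 8, type = 'double')
bm[, 1:4] <- orig
desc <- describe(bm)  # descriptor the workers use to attach

cl <- makeCluster(2)
registerDoParallel(cl)

foreach(j = 1:4, .packages = 'bigmemory') %dopar% {
  m <- attach.big.matrix(desc)  # attach to the shared matrix in this worker
  m[, 4 + j] <- sqrt(m[, j])    # fill a pre-allocated derived column
  NULL
}
stopCluster(cl)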
