I have a problem with the output of r.cross. I hope you can follow my description without an MWE:
I have 3 rasters I want to cross with the following characteristics:
GRASS 7.4.0 (Bengue):~ > r.stats soil_t,lcov,watermask -N
100%
4 8 0
4 8 1
4 9 0
[...]
I would expect r.cross to create a raster with a category for each line shown above. However, I get the following:
GRASS 7.4.0 (Bengue):~ > r.cross input=soil_t,lcov,watermask output=svc
GRASS 7.4.0 (Bengue):~ > r.category svc
0
1 category 4; category 8; category 1
2 category 4; category 9; category 0
[...]
Why is the first line just zero when one would rather expect something like: 1 category 4; category 8; category 0?
EDIT: Just noticed that under GRASS version 6.4 it runs as expected:
GRASS 6.4.6 (Bengue):~ > r.category svc
0
1 category 4; category 8; category 0
2 category 4; category 8; category 1
3 category 4; category 9; category 0
So, something must be wrong with the 7.4 version of r.cross?!
Thanks for your help!
System infos:
GRASS version 7.4.0
Ubuntu MATE 16.04 (xenial)
Just in case somebody comes across this post: the same issue was raised on the mailing list shortly after this post by somebody else: https://lists.osgeo.org/pipermail/grass-user/2018-February/077934.html. It appears to be a bug that is not yet fixed in the latest release version of GRASS.
I am just running the following example from the GGEBiplotGUI package and, of course, it works properly.
library(GGEBiplotGUI)
data("Ontario")
Ontario
GGEBiplot(Data = Ontario)
But the script does not work when I download the "Ontario" data and run it on my PC. See the example below.
Ontario <- read.csv("Book.csv")
library(GGEBiplotGUI)
GGEBiplot(Data = Ontario)
The result is the following table (columns 0 to 10), which takes the row numbers (1 to 17) as genotypes and "X" as another location. Please see the result below.
X BH93 EA93 HW93 ID93 KE93 NN93 OA93 RN93 WP93
1 ann 4.460 4.150 2.849 3.084 5.940 4.450 4.351 4.039 2.672
2 ari 4.417 4.771 2.912 3.506 5.699 5.152 4.956 4.386 2.938
3 aug 4.669 4.578 3.098 3.460 6.070 5.025 4.730 3.900 2.621
4 cas 4.732 4.745 3.375 3.904 6.224 5.340 4.226 4.893 3.451
5 del 4.390 4.603 3.511 3.848 5.773 5.421 5.147 4.098 2.832
6 dia 5.178 4.475 2.990 3.774 6.583 5.045 3.985 4.271 2.776
7 ena 3.375 4.175 2.741 3.157 5.342 4.267 4.162 4.063 2.032
8 fun 4.852 4.664 4.425 3.952 5.536 5.832 4.168 5.060 3.574
9 ham 5.038 4.741 3.508 3.437 5.960 4.859 4.977 4.514 2.859
10 har 5.195 4.662 3.596 3.759 5.937 5.345 3.895 4.450 3.300
11 kar 4.293 4.530 2.760 3.422 6.142 5.250 4.856 4.137 3.149
12 kat 3.151 3.040 2.388 2.350 4.229 4.257 3.384 4.071 2.103
13 luc 4.104 3.878 2.302 3.718 4.555 5.149 2.596 4.956 2.886
14 m12 3.340 3.854 2.419 2.783 4.629 5.090 3.281 3.918 2.561
15 reb 4.375 4.701 3.655 3.592 6.189 5.141 3.933 4.208 2.925
16 ron 4.940 4.698 2.950 3.898 6.063 5.326 4.302 4.299 3.031
17 rub 3.786 4.969 3.379 3.353 4.774 5.304 4.322 4.858 3.382
How can I fix this problem? That is, how can I avoid the row names and "X" being treated as variables in the GGEBiplotGUI analysis?
I have also tried the following, without success:
attributes(Ontario)$row.names <- NULL
print(Ontario, row.names = F)
row.names(Ontario) <- NULL
Ontario[, -1] ## It deletes the first column not the 0 one.
Many thanks in advance!
This code worked properly:
Ontario <- read.csv("Libro.csv")
rownames(Ontario) <- Ontario$X   # use the genotype names ("X") as row names
Ontario1 <- Ontario[, -1]        # drop the first ("X") column
library(GGEBiplotGUI)
GGEBiplot(Data = Ontario1)       # pass the cleaned data frame, not the original
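An equivalent shortcut, assuming the genotype names are in the first column of the CSV, is to let read.csv assign the row names directly:
Ontario1 <- read.csv("Libro.csv", row.names = 1)  # first column becomes row names, not a variable
library(GGEBiplotGUI)
GGEBiplot(Data = Ontario1)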
My dataset consists of a number of variables:
* Example generated by -dataex-. To install: ssc install dataex
clear
input float(v1 v2) str11 Date float(v4 v5 v6 v7 v8)
1 2 "15-aug-2016" 1 1 1 1 1
1 2 "07-may-2015" 1 1 1 1 50
1 2 "07-may-2015" 1 1 1 1 88
1 2 "15-aug-2016" 1 1 1 1 29
end
The variable Date holds the date as a string, which I convert to a Stata date variable:
generate double date = date(Date,"DMY")
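Optionally, a display format makes the new date variable readable when browsing (it does not affect anything below):
format date %td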
My duplicates are the same for v1-v2-v4-v5-v6-v7 (as in the example), while v8 is different.
I need to delete duplicates based on v1-v2-v4-v5-v6-v7 and keep the one with the smallest date (here 07-may-2015).
I have tried without success:
1.
gsort -date
bysort v1 v2 v4 v5 v6 v7: generate dublet=_n
order dublet date
keep if dublet==1
drop dublet
--> Works for the first 25 rows or so, then keeps the wrong one a couple of times and then the right one again. (It seems to me that bysort discards the sort order established by gsort; does anyone know if that's correct?)
2.
bysort v1 v2 v4 v5 v6 v7 (date) : keep if _n == _N
--> Obviously keeps the wrong one, since this sorts date ascending and keeps the latest observation.
However, -date is not an option inside the parentheses; Stata writes: - invalid name
You could change your second attempt to bysort v1 v2 v4 v5 v6 v7 (date) : keep if _n == 1, and that should give you what you're looking for.
Since your data example contains duplicate dates (two observations fall on 7 May 2015), you will get a random one of the observations with the minimum date.
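A minimal sketch of that command, using the variable names from your data example:
* within each v1-v2-v4-v5-v6-v7 group, sort by date ascending and keep the earliest observation
bysort v1 v2 v4 v5 v6 v7 (date): keep if _n == 1
If you would rather break ties on date deterministically instead of at random, you could add a further sort key inside the parentheses, e.g. (date v8).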
I want to perform model selection among ~150 fixed-effect and 7 random-effect variables, on a set of 360 observations. I decided to use the Lasso procedure for mixed models, with the glmmLasso package. I did a lot of research to find examples of comparable models, without success. Here is a sample of my data:
> str(RHI_12)
'data.frame': 350 obs. of 164 variables:
$ RHI_counts_12 : int 0 14 1 3 2 2 2 0 0 1 ...
$ Site : Factor w/ 6 levels "14_metzerlen",..: 1 1 1 1 1 1 1 1 1 1 ...
$ Location : Factor w/ 30 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
$ Dist_roost : num 0.985 0.88 0.908 0.888 0.89 ...
$ Natural_light : num -0.194 -0.194 -0.194 -0.194 -0.194 ...
$ Mean_wind : num 0.836 0.836 0.836 0.836 0.836 ...
$ Mean_temp : num -0.427 -0.427 -0.427 -0.427 -0.427 ...
$ Day : num -0.993 -0.993 -0.993 -0.993 -0.993 ...
$ Artificial_light: num -0.2016 -0.2016 0.0772 -0.2016 -0.2016 ...
$ WBdi : num 1.14 1.14 1.14 1.14 1.14 ...
$ WCdi : num 1.49 1.49 1.49 1.49 1.47 ...
... (many more fixed-effect variables)
The response variable is counts (RHI_counts_12).
My question is about the structure of the random-effect variables in the model.
I have 2 categorical random-effect variables ("Site" and "Location"; "Location" is nested in "Site") and 5 numerical random-effect variables. I have structured my model like this (using only a sample of the fixed-effect variables):
lasso1 <- glmmLasso(RHI_counts_12 ~ Artificial_light + WBdi + WCdi + BUdi + FOdi + TIdi,
                    rnd = list(Site = ~1, Location = ~1 + Dist_roost + Natural_light + Mean_wind + Mean_temp + Day),
                    lambda = 500, family = poisson(link = log), data = RHI_12)
I am not at all convinced this is the right way to structure the random effects given these 2 nested categorical random effects. I want a model with Location nested in Site, and I do not think that this is what I get. Here is my output for the random effects (in this output, "Loc" stands for Location and "siteName" for Site):
Random Effects:
StdDev:
[[1]]
siteName
siteName 1.180514
[[2]]
Loc Loc:Dist_roost Loc:Natural_light Loc:Mean_wind
Loc 1.15105859 -0.66317669 -0.35354821 -0.10805268
Loc:Dist_roost -0.66317669 1.42601945 0.46004662 -0.42795987
Loc:Natural_light -0.35354821 0.46004662 0.49532786 -0.15485395
Loc:Mean_wind -0.10805268 -0.42795987 -0.15485395 0.76175417
Loc:Mean_temp 0.02677276 0.03961902 -0.01431360 -0.03649499
Loc:Day 0.03756960 -0.02081360 0.02520654 -0.12082652
Loc:Mean_temp Loc:Day
Loc 0.02677276 0.03756960
Loc:Dist_roost 0.03961902 -0.02081360
Loc:Natural_light -0.01431360 0.02520654
Loc:Mean_wind -0.03649499 -0.12082652
Loc:Mean_temp 0.36923939 -0.08311209
Loc:Day -0.08311209 0.56876662
Does this look right? I was not able to build the model with "Location" nested in "Site" (and all the other random slopes would also be nested in "Site"). I have tried many different ways without success; one explicit-nesting idea is sketched below.
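For instance, one way I imagined encoding the nesting explicitly is the following untested sketch (SiteLoc is just an illustrative name for a combined Site-by-Location factor; I am not sure this is how glmmLasso expects nesting to be expressed, which is part of my question):
# build an explicit "Location within Site" grouping factor
RHI_12$SiteLoc <- interaction(RHI_12$Site, RHI_12$Location, drop = TRUE)
lasso_nested <- glmmLasso(RHI_counts_12 ~ Artificial_light + WBdi + WCdi + BUdi + FOdi + TIdi,
                          rnd = list(Site = ~1,
                                     SiteLoc = ~1 + Dist_roost + Natural_light + Mean_wind + Mean_temp + Day),
                          lambda = 500, family = poisson(link = log), data = RHI_12)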
Thank you very much in advance for reading this, and for any advice on the structure of random effects in glmmLasso! :-)
Thomas
Hi, I have two tables in my DB. The first table is given below.
Table name-
t_hcsy_details
class name in model-
class THcsyDetails < ActiveRecord::Base
end
The values inside the table are given below.
HCSY_Details_ID HCSY_ID HCSY_Fund_Type_ID Amount
1 2 1 1125
2 2 2 390
3 2 3 285
4 2 4 100
5 2 5 60
6 2 6 40
My second table is given below.
Table Name:
t_hcsy_fund_type_master
class in model:
class THcsyFundTypeMaster < ActiveRecord::Base
end
Table values are given below.
HCSY_Fund_Type_ID Fund_Type_Code Fund_Type_Name Amount
1 1 woods 1125
2 2 Burning 390
3 3 goods 285
4 4 brahmin 100
5 5 swd 60
6 6 Photo 40
I only know the HCSY_ID value (i.e. 2) of the first table, but I need Fund_Type_Name and Amount from the second table. As you can see, one HCSY_ID has 6 different records, and I need the Fund_Type_Name and Amount for all of them. Please help me resolve this by creating objects for the two classes shown above.
You haven't specified any relationship setup, so it is easier to split this into two queries:
# you already have hcsy_id
fund_type_ids = THcsyDetails.where(hcsy_id: hcsy_id).pluck(:hcsy_fund_type_id)
fund_types = THcsyFundTypeMaster.where(id: fund_type_ids)
fund_types.group(:fund_type_name).sum(:amount)
If you had proper relationships set up, the above would simplify to:
THcsyDetails.
joins(association_name). # THcsyFundTypeMaster
where(hcsy_id: hcsy_id).
group("#{t = THcsyFundTypeMaster.table_name}.fund_type_name").
sum("#{t}.amount")
I am working on hierarchical panel data using WinBUGS. Assume data on school performance: the dependent variable logs, with independent variables logp and rank. All schools are divided into three categories (cat), and I need a beta coefficient for each category (hence HLM). I want to account for time-specific and school-specific effects in the model. One way would be to add dummy variables to the terms under mu[i], but that would get messy because the number of schools runs up to 60. I am sure there must be a better way to handle that.
My data looks like the following:
school time logs logp cat rank
1 1 4.2 8.9 1 1
1 2 4.2 8.1 1 2
1 3 3.5 9.2 1 1
2 1 4.1 7.5 1 2
2 2 4.5 6.5 1 2
3 1 5.1 6.6 2 4
3 2 6.2 6.8 3 7
#logs = log(score)
#logp = log(average hours of inputs)
#rank - rank of school
#cat = section red, section blue, section white in school (hierarchies)
My WinBUGS code is given below.
model {
  # N observations
  for (i in 1:n) {
    logs[i] ~ dnorm(mu[i], tau)
    mu[i] <- bcons + bprice*logp[i] + brank[cat[i]]*rank[i]
  }
  # C categories
  for (c in 1:C) {
    brank[c] ~ dnorm(beta, taub)
  }
  # priors
  bcons ~ dnorm(0, 1.0E-6)
  bprice ~ dnorm(0, 1.0E-6)
  bad ~ dnorm(0, 1.0E-6)
  beta ~ dnorm(0, 1.0E-6)
  tau ~ dgamma(0.001, 0.001)
  taub ~ dgamma(0.001, 0.001)
}
As you can see in the data sample above, I have multiple observations per school over time. How can I modify the code to account for time- and school-specific fixed effects? I have used Stata in the past, where options like fe, be, and i.time take care of fixed effects in panel data, but here I am lost.
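To make the idea concrete, here is an untested sketch of the indexed-coefficient version of that dummy-variable approach (a.school, a.time, S, and T are placeholder names I am introducing; S and T would have to be passed with the data as the number of schools and time periods):
model {
  for (i in 1:n) {
    logs[i] ~ dnorm(mu[i], tau)
    mu[i] <- bcons + bprice*logp[i] + brank[cat[i]]*rank[i]
             + a.school[school[i]] + a.time[time[i]]
  }
  # school-specific effects: one coefficient per school instead of ~60 dummies,
  # identified by a corner constraint on school 1
  a.school[1] <- 0
  for (s in 2:S) { a.school[s] ~ dnorm(0, 1.0E-6) }
  # time-specific effects, same idea
  a.time[1] <- 0
  for (t in 2:T) { a.time[t] ~ dnorm(0, 1.0E-6) }
  # remaining structure and priors as in the original model
  for (c in 1:C) { brank[c] ~ dnorm(beta, taub) }
  bcons ~ dnorm(0, 1.0E-6)
  bprice ~ dnorm(0, 1.0E-6)
  beta ~ dnorm(0, 1.0E-6)
  tau ~ dgamma(0.001, 0.001)
  taub ~ dgamma(0.001, 0.001)
}
The corner constraints (a.school[1] <- 0, a.time[1] <- 0) with vague priors on the remaining elements are what make these behave like fixed effects rather than exchangeable random effects.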