I'm quite new to R; I've just been using it for the past month or so for my master's thesis. I finally (hopefully) have figured out my model, and now I'm trying to present it in a table. After some research, everyone seems to really love the gtsummary package, and I really want to learn to use it. But for some reason, the most basic function is not working for me.
Here's my lmer model:
res1 <- lmer(pH ~ Species * SedLayer + (+1|Site), data = data)
summary(res1)
Anores1 <- Anova(res1)
Anores1
Which gives me quite a lot of output, including an output data frame. And here's the basic code I'm trying to use:
lmer(pH ~ SedLayer * Species + (+1|Site), data = dat)
#> Error in lmer(pH ~ SedLayer * Species + (+1 | Site), data = dat): could not find function "lmer"
Created on 2021-05-16 by the reprex package (v2.0.0)
And I keep getting the following error message:
Error in match.call() : ... used in a situation where it does not exist
I've got all the following packages installed and loaded:
library(stats)
library(rstatix)
library(car)
library(lme4)
library(lmerTest)
library("gtsummary")
library("gt")
library("broom.mixed")
library(tidyverse)
I'm sure I must be making a very silly mistake, but I cannot figure it out! Is my model somehow too weird to put in a table? Is there an additional package I should load? Or am I just writing it wrong?
Thanks for any help you can give me!
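In case it helps to see all the pieces together, here is a minimal sketch of the workflow the question seems to be aiming for, assuming the "most basic function" meant is tbl_regression() and that dat is the data frame from the model code. (Note that the reprex error "could not find function 'lmer'" just means library(lme4) was not included in the reprex itself.)

library(lme4)
library(gtsummary)
library(broom.mixed)  # supplies the tidiers gtsummary uses for mixed models

# Refit the model; (1 | Site) is the usual spelling of the random intercept.
res1 <- lmer(pH ~ Species * SedLayer + (1 | Site), data = dat)

# Spelling out tidy_fun makes the mixed-model tidier explicit.
tbl_regression(res1, tidy_fun = broom.mixed::tidy)

If this still throws the match.call() error, it is worth updating gtsummary and broom.mixed, since support for merMod objects has changed across releases.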
I got this error when compiling my pipeline:
type name google.VertexModel is different from expected: Model
when running the following notebook by google: automl_tabular_classification_beans
I suppose that Kubeflow Pipelines v2 is not able to handle google.VertexModel as a type for component input yet. However, I've been browsing a bit and did not find any good clue or reference to solve this issue (the kfp documentation for v2 is not up to date...). Hopefully someone here can give me a good pointer? I look forward to all of your ideas.
Cheers
google.VertexModel is defined here:
https://github.com/kubeflow/pipelines/blob/286a49547cce763c502592c822296aa60f50b3e8/components/google-cloud/google_cloud_pipeline_components/types/artifact_types.py#L20
Here is an example of how to define it:
https://github.com/kubeflow/pipelines/blob/286a49547cce763c502592c822296aa60f50b3e8/components/google-cloud/tests/types/artifact_types_test.py#L22
For example,
from google_cloud_pipeline_components.types import artifact_types

# 'YOUR_MODEL_URI_STRING' is a placeholder for the model's actual URI.
model = artifact_types.VertexModel(uri='YOUR_MODEL_URI_STRING')
Can you try specifying your model using the syntax above and let us know if this works for your code?
This was a breaking change with release 0.1.9. Here are some recommendations:
Pin your release to 0.1.7 and continue to use the Model type.
Use 0.1.9 and switch the output from Output[Model] to Output[Artifact].
Try the 0.2.0 release (see its documentation).
Hope these suggestions work!
I have trained my own model on a custom dataset using YOLOv4, and I have downloaded the .cfg, .weights, and .data files.
When I try to run my model using:
darknet.exe detector test cfg/obj.data cfg/yolov4-og.cfg custom-yolov4-detector_best.weights
I get the error:
Error: l.outputs == params.inputs filters= in the [convolutional]-layer doesn't correspond to classes= or mask= in [yolo]-layer
I don't know if this is an error on my part, with the command I am running, or an error from the model I trained.
Any help would be appreciated.
I am assuming you are using the main darknet repo, i.e. AlexeyAB's. Please make sure you follow these instructions:
Make sure you assign the correct number of classes (classes=) in the config file.
Change filters=255 to filters=(classes + 5)*3 in the three [convolutional] layers before the [yolo] layers; keep in mind that it only has to be the last [convolutional] before each of the [yolo] layers:
https://github.com/AlexeyAB/darknet/blob/0039fd26786ab5f71d5af725fc18b3f521e7acfd/cfg/yolov3.cfg#L603
https://github.com/AlexeyAB/darknet/blob/0039fd26786ab5f71d5af725fc18b3f521e7acfd/cfg/yolov3.cfg#L689
https://github.com/AlexeyAB/darknet/blob/0039fd26786ab5f71d5af725fc18b3f521e7acfd/cfg/yolov3.cfg#L776
So if classes=1, then filters=18; if classes=2, then filters=21.
(Generally, filters depends on the number of classes, coords, and masks: filters=(classes + coords + 1)*<number of mask>, where mask is the indices of anchors. If mask is absent, then filters=(classes + coords + 1)*num. A minimal cfg sketch follows the reference below.)
Reference: https://github.com/AlexeyAB/darknet#how-to-train-to-detect-your-custom-objects
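To make the arithmetic concrete, here is a minimal sketch of what the relevant end of a cfg might look like for a hypothetical single-class model (the values are illustrative, not taken from the asker's yolov4-og.cfg; only the filters=/classes= pairing matters):

# last [convolutional] before a [yolo] layer:
# filters = (classes + 5) * 3 = (1 + 5) * 3 = 18 for classes=1
[convolutional]
size=1
stride=1
pad=1
filters=18
activation=linear

[yolo]
mask = 6,7,8
# classes here must be consistent with filters= in the layer above
classes=1
num=9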
I am getting this error when running my code:
Error in FUN(X[[i]], ...): trying to get slot "presence" from an object of a basic class ("NULL") with no slots
How can I solve this?
You should check the formulas to see whether you are using the same response name in both calls, i.e.
sdmdata <- sdmData(species ~ ., train, test, predictors, bg..)
Writing a different name in the model call will give you the error you described:
sdmmodel <- sdm(specie ~ ., data = sdmdata, methods = c("glm", "brt"))
Note species vs. specie. I solved a similar problem that way.
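For reference, here is a minimal self-contained sketch of the matching-names pattern with the sdm package; train_df is a hypothetical data frame whose response column is named species (replace it with your own data):

library(sdm)

# The response name in the formula must match a column in train_df.
sdmdata <- sdmData(species ~ ., train = train_df)

# Use the identical response name here ('species', not 'specie').
sdmmodel <- sdm(species ~ ., data = sdmdata, methods = c("glm", "brt"))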
My students are using the mosaic package in RStudio, and one of them is not able to use the function plotFun. Every time she tries to use it, she gets the same error. For example,
> plotFun(x+2~x)
Error in as.data.frame.default(x[[i]], optional = TRUE) :
cannot coerce class ‘"formula"’ to a data.frame
Are there any thoughts as to what is going wrong? She will need to use this function often for this class; is she missing a package or update of R or RStudio?
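One thing worth checking, as a guess rather than a confirmed fix, since the error points at method dispatch rather than syntax: make sure mosaic is current and actually attached in her session, for example:

# Reinstall/update mosaic and its dependencies, then attach it so that
# plotFun() and the lattice machinery it relies on are available.
install.packages("mosaic")
library(mosaic)
plotFun(x + 2 ~ x)  # should draw the line y = x + 2

If the example above works in a fresh session, the original failure was likely a stale installation or a masking conflict in her environment.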
I'm trying to export a SparkR model as PMML.
My first approach was using the pmml package:
library(pmml)
library(SparkR)  # needed for sparkR.session(), createDataFrame(), and spark.kmeans()
sparkR.session()
data(iris)
df <- createDataFrame(iris)
model <- spark.kmeans(df, Sepal_Length ~ Sepal_Width, k = 4, initMode = "random")
model_pmml <- pmml(model)
The error:
Error in UseMethod("pmml"): no applicable method for 'pmml' applied to an object of class "KMeansModel"
Traceback:
1. pmml(model)
I also investigated whether the toPMML method available on Scala models could be used from SparkR. I've found a question that suggests it may be possible with sparklyr, but not with SparkR.
Any ideas?
I have come to the conclusion that exporting a SparkR model is not supported. I have added a feature request for this: https://issues.apache.org/jira/browse/SPARK-21430. Please vote on the JIRA ticket if you are also looking for this functionality.
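For context, the original error is just S3 method dispatch failing: the pmml package ships pmml() methods for local model classes such as stats::kmeans, but none for SparkR's "KMeansModel" wrapper around a JVM object. A minimal sketch showing the same generic working on a comparable local model:

library(pmml)

# A local k-means model from base R's stats package.
km <- kmeans(iris[, 1:4], centers = 4)

# Works because the pmml package defines a method for class "kmeans";
# no such method exists for SparkR's "KMeansModel", hence the
# "no applicable method" error above.
local_pmml <- pmml(km)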