Why is "Error in UseMethod("extract_")" indicated when attempting to use raster::extract?

I am attempting to use the raster package's extract method to extract values from a Raster* object:
RStudioPrompt> jpnpe <- extract(jpnp, jpnb, fun = mean, na.rm = T)
where jpnp is the raster object and jpnb is a SpatialPolygonsDataFrame.
However, the following error is indicated:
Error in UseMethod("extract_") :
no applicable method for 'extract_' applied to an object of class "c('RasterStack', 'Raster', 'RasterStackBrick', 'BasicRaster')"
How can I get past this error?

The issue may be due to another loaded package having a function with the same name, masking raster's extract method.
The tidyr package has an extract function which conflicts with raster's extract.
Confirm this by checking which packages are loaded; note that both tidyr and raster appear on the search path:
> search()
 [1] ".GlobalEnv"          "package:tidyr"       "package:dplyr"
 [4] "package:rgeos"       "package:ggplot2"     "package:RColorBrewer"
 [7] "package:animation"   "package:rgdal"       "package:maptools"
[10] "package:raster"      "package:sp"          "tools:rstudio"
[13] "package:stats"       "package:graphics"    "package:grDevices"
[16] "package:utils"       "package:datasets"    "package:methods"
[19] "Autoloads"           "package:base"
You can also check which extract function is being picked up by typing the name of the function without parentheses; the environment line at the end of the output tells you which package it comes from:
> extract
function (data, col, into, regex = "([[:alnum:]]+)", remove = TRUE,
    convert = FALSE, ...)
{
    col <- col_name(substitute(col))
    extract_(data, col, into, regex = regex, remove = remove,
        convert = convert, ...)
}
<environment: namespace:tidyr>
To resolve the error, unload the offending package. In RStudio you can use the following command:
> .rs.unloadPackage("tidyr")
and then re-run the raster extract call:
>jpnpe <- extract(jpnp, jpnb, fun = mean, na.rm = T)
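Alternatively (not part of the original answer, but standard R practice), you can sidestep the name masking without unloading anything, either by calling the function with an explicit namespace qualifier or by detaching tidyr:
> jpnpe <- raster::extract(jpnp, jpnb, fun = mean, na.rm = TRUE)  # explicit namespace, immune to masking
> detach("package:tidyr", unload = TRUE)                          # or remove tidyr from the search path
The raster::extract() form is the least invasive fix, since the rest of the session can keep using tidyr.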

Related

JMeter BeanShell Sampler error: Error invoking bsh method: eval import org.apache.commons.io.FileUtils

I use BeanShell code to load hundreds of SQL files in JMeter:
import org.apache.commons.io.FileUtils;

File folder = new File("D:\\sql99");
File[] sqlFiles = folder.listFiles();
for (int i = 0; i < sqlFiles.length; i++) {
    File sqlFile = sqlFiles[i];
    if (sqlFile.isFile()) {
        vars.put("query_" + i, sqlFile.getName(),
            FileUtils.readFileToString(sqlFiles[i]));
    }
}
but get the following error:
17:42:03,301 ERROR o.a.j.u.BeanShellInterpreter: Error invoking bsh method: eval Sourced file: inline evaluation of: ``import org.apache.commons.io.FileUtils; File folder = new File("D:\sql99"); Fi . . . '' : Error in method invocation: Method put( java.lang.String, java.lang.String, java.lang.String ) not found in class'org.apache.jmeter.threads.JMeterVariables'
I want to get each SQL file's execution time in the JMeter results tree. How can I fix the code?
Thanks!
You're trying to call the JMeterVariables.put() function, which accepts 2 Strings as parameters, while passing 3 Strings.
The correct syntax is vars.put("variable-name", "variable-value"); so you need to decide how to amend this line:
vars.put("query_" + i, sqlFile.getName(), FileUtils.readFileToString(sqlFiles[i]));
so it would contain only 2 parameters instead of 3.
Also, since JMeter 3.1 it is recommended to use JSR223 Test Elements and the Groovy language for scripting, mainly for performance reasons, so switching might be a good option (the same code will work in Groovy without changes, assuming you fix the issue with the vars.put() function call).
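One possible way to amend the loop (a sketch only; the query_name_ prefix is an arbitrary choice, not something JMeter requires) is to split the call into two two-argument vars.put() calls, one for the file name and one for its contents:
import org.apache.commons.io.FileUtils;

File folder = new File("D:\\sql99");
File[] sqlFiles = folder.listFiles();
for (int i = 0; i < sqlFiles.length; i++) {
    File sqlFile = sqlFiles[i];
    if (sqlFile.isFile()) {
        // two-argument calls: variable name -> variable value
        vars.put("query_name_" + i, sqlFile.getName());
        vars.put("query_" + i, FileUtils.readFileToString(sqlFile));
    }
}
The queries can then be referenced as ${query_0}, ${query_1}, ... from separate samplers (e.g. JDBC Request samplers), so each one's elapsed time shows up individually in the results tree.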

Validate Python TypedDict at runtime

I'm working in a Python 3.8+ Django/Rest-Framework environment, enforcing types in new code but built on a lot of untyped legacy code and data. We are using TypedDicts extensively to ensure that the data we generate is passed to our TypeScript front end with the proper data types.
MyPy/PyCharm/etc. do a great job of checking that our new code emits data that conforms, but we want to test that the output of our many RestSerializers/ModelSerializers fits the TypedDict. If I have a serializer and TypedDict like:
class PersonSerializer(ModelSerializer):
    class Meta:
        model = Person
        fields = ['first', 'last']

class PersonData(TypedDict):
    first: str
    last: str
    email: str
and then run code like:
person_dict: PersonData = PersonSerializer(Person.objects.first()).data
Static type checkers aren't able to figure out that person_dict is missing the required email key, because (by design of PEP 589) it is just a normal dict at runtime. But I can write something like:
annotations = PersonData.__annotations__
for k in annotations:
    assert k in person_dict  # or something more complex.
    assert isinstance(person_dict[k], annotations[k])
and it will find that email is missing from the data of the serializer. This is well and good in this case, where I don't have any changes introduced by from __future__ import annotations (not sure if this would break it), and all my type annotations are bare types. But if PersonData were defined like:
class PersonData(TypedDict):
    email: Optional[str]
    affiliations: Union[List[str], Dict[int, str]]
then isinstance is not good enough to check if the data passes (since "Subscripted generics cannot be used with class and instance checks").
What I'm wondering is if there already exists a callable function/method (in mypy or another checker) that would allow me to validate a TypedDict (or even a single variable, since I can iterate a dict myself) against an annotation and see if it validates?
I'm not concerned about speed, etc., since the point of this is to check all our data/methods/functions once and then remove the checks later once we're happy that our current data validates.
The simplest solution I found works using pydantic.
from typing import cast, TypedDict

import pydantic

class SomeDict(TypedDict):
    val: int
    name: str

# this could be a valid/invalid declaration
obj: SomeDict = {
    'val': 12,
    'name': 'John',
}

# validate with pydantic
try:
    obj = cast(SomeDict, pydantic.create_model_from_typeddict(SomeDict)(**obj).dict())
except pydantic.ValidationError as exc:
    print(f"ERROR: Invalid schema: {exc}")
EDIT: When type checking this, it currently returns an error, but works as expected. See here: https://github.com/samuelcolvin/pydantic/issues/3008
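As a side note (not from the original answer): in pydantic v2 the same check can be done with TypeAdapter instead of create_model_from_typeddict. A minimal sketch, assuming pydantic v2 is installed:
import pydantic

# TypeAdapter builds a validator for the TypedDict and checks a plain dict against it
try:
    obj = pydantic.TypeAdapter(SomeDict).validate_python(obj)
except pydantic.ValidationError as exc:
    print(f"ERROR: Invalid schema: {exc}")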
You may want to have a look at https://pypi.org/project/strongtyping/. This may help.
In the docs you can find this example:
from typing import List, TypedDict

from strongtyping.strong_typing import match_class_typing

@match_class_typing
class SalesSummary(TypedDict):
    sales: int
    country: str
    product_codes: List[str]

# works like expected
SalesSummary({"sales": 10, "country": "Foo", "product_codes": ["1", "2", "3"]})

# will raise a TypeMisMatch
SalesSummary({"sales": "Foo", "country": 10, "product_codes": [1, 2, 3]})
A little bit of a hack, but you can check two types using the mypy command line -c option. Just wrap it in a Python function:
import subprocess

def is_assignable(type_to, type_from) -> bool:
    """
    Returns true if `type_from` can be assigned to `type_to`,
    e. g. type_to := type_from

    Example:
    >>> is_assignable(bool, str)
    False
    >>> from typing import *
    >>> is_assignable(Union[List[str], Dict[int, str]], List[str])
    True
    """
    code = "\n".join((
        f"import typing",
        f"type_to: {type_to}",
        f"type_from: {type_from}",
        f"type_to = type_from",
    ))
    return subprocess.call(("mypy", "-c", code)) == 0
You could do something like this:
from typing import Any

def validate(typ: Any, instance: Any) -> bool:
    for property_name, property_type in typ.__annotations__.items():
        value = instance.get(property_name, None)
        if value is None:
            # Check for missing keys
            print(f"Missing key: {property_name}")
            return False
        elif property_type not in (int, float, bool, str):
            # check if property_type is an object (e.g. not a primitive)
            result = validate(property_type, value)
            if result is False:
                return False
        elif not isinstance(value, property_type):
            # Check for type equality
            print(f"Wrong type: {property_name}. Expected {property_type}, got {type(value)}")
            return False
    return True
And then test some object, e.g. one that was passed to your REST endpoint:
class MySubModel(TypedDict):
    subfield: bool

class MyModel(TypedDict):
    first: str
    last: str
    email: str
    sub: MySubModel

m = {
    'email': 'JohnDoeAtDoeishDotCom',
    'first': 'John'
}

assert validate(MyModel, m) is False
This one prints the first error and returns a bool; you could change that to exceptions, possibly collecting all the missing keys. You could also extend it to fail on keys that are not defined by the model.
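The question specifically calls out Optional and Union annotations, which the plain isinstance() check above cannot handle. A possible extension (my own sketch, not part of the original answer; it assumes Python 3.8+, where typing.get_origin/get_args are available) unwraps unions and only shallow-checks parameterized containers:
from typing import Any, Union, get_args, get_origin

def check_type(value: Any, expected: Any) -> bool:
    origin = get_origin(expected)
    if origin is Union:
        # Optional[X] is Union[X, None]; accept the value if any member matches
        return any(check_type(value, arg) for arg in get_args(expected))
    if origin is not None:
        # Parameterized generic such as List[str] or Dict[int, str]:
        # only the outer container type is checked here (a shallow check)
        return isinstance(value, origin)
    return isinstance(value, expected)
You could call check_type() in place of the isinstance() test inside validate() to cope with annotations like Optional[str] or Union[List[str], Dict[int, str]].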
I like your solution! To save the user from fixing errors one iteration at a time, I added some code to your solution :D
from typing import Any, TypedDict

def validate_custom_typed_dict(instance: Any, custom_typed_dict: TypedDict) -> bool:
    key_errors = []
    type_errors = []
    for property_name, type_ in custom_typed_dict.__annotations__.items():
        value = instance.get(property_name, None)
        if value is None:
            # Check for missing keys
            key_errors.append(f"\t- Missing property: '{property_name}' \n")
        elif type_ not in (int, float, bool, str):
            # check if type is an object (e.g. not a primitive) and validate it recursively
            result = validate_custom_typed_dict(value, type_)
            if result is False:
                type_errors.append(f"\t- '{property_name}' expected {type_}, got {type(value)}\n")
        elif not isinstance(value, type_):
            # Check for type equality
            type_errors.append(f"\t- '{property_name}' expected {type_}, got {type(value)}\n")
    if len(key_errors) > 0 or len(type_errors) > 0:
        error_message = f'\n{"".join(key_errors)}{"".join(type_errors)}'
        raise Exception(error_message)
    return True
some console output:
Exception:
- Missing property: 'Combined_cycle'
- Missing property: 'Solar_PV'
- Missing property: 'Hydro'
- 'timestamp' expected <class 'str'>, got <class 'int'>
- 'Diesel_engines' expected <class 'float'>, got <class 'int'>

How to access JSON from external data source in Terraform?

I am receiving JSON from an http Terraform data source:
data "http" "example" {
url = "${var.cloudwatch_endpoint}/api/v0/components"
# Optional request headers
request_headers {
"Accept" = "application/json"
"X-Api-Key" = "${var.api_key}"
}
}
It outputs the following.
http = [{"componentID":"k8QEbeuHdDnU","name":"Jenkins","description":"","status":"Partial Outage","order":1553796836},{"componentID":"ui","name":"ui","description":"","status":"Operational","order":1554483781},{"componentID":"auth","name":"auth","description":"","status":"Operational","order":1554483781},{"componentID":"elig","name":"elig","description":"","status":"Operational","order":1554483781},{"componentID":"kong","name":"kong","description":"","status":"Operational","order":1554483781}]
which is a string in Terraform. In order to convert this string into JSON I pass it to an external data source, which is a simple Ruby script. Here is the Terraform to pass it:
data "external" "component_ids" {
program = ["ruby", "./fetchComponent.rb",]
query = {
data = "${data.http.example.body}"
}
}
Here is the Ruby script:
#!/usr/bin/env ruby
require 'json'
data = JSON.parse(STDIN.read)
results = data.to_json
STDOUT.write results
All of this works. The external data source outputs the following (it appears the same as the http output), but according to the Terraform docs this should be a map:
external1 = {
data = [{"componentID":"k8QEbeuHdDnU","name":"Jenkins","description":"","status":"Partial Outage","order":1553796836},{"componentID":"ui","name":"ui","description":"","status":"Operational","order":1554483781},{"componentID":"auth","name":"auth","description":"","status":"Operational","order":1554483781},{"componentID":"elig","name":"elig","description":"","status":"Operational","order":1554483781},{"componentID":"kong","name":"kong","description":"","status":"Operational","order":1554483781}]
}
I was expecting that I could now access the data inside the external data source, but I am unable to.
Ultimately what I want to do is create a list of the componentID values which are located within the external data source.
Some things I have tried:
* output.external: key "0" does not exist in map data.external.component_ids.result in:
  ${data.external.component_ids.result[0]}
* output.external: At column 3, line 1: element: argument 1 should be type list, got type string in:
  ${element(data.external.component_ids.result["componentID"],0)}
* output.external: key "componentID" does not exist in map data.external.component_ids.result in:
  ${data.external.component_ids.result["componentID"]}
* output.external: lookup: lookup failed to find 'componentID' in:
  ${lookup(data.external.component_ids.*.result[0], "componentID")}
I appreciate the help.
I can't test with the variable cloudwatch_endpoint, so I have to think about the solution.
Terraform can't decode JSON directly before 0.12, but there is a workaround for working with nested lists.
Your Ruby script needs to be adjusted to produce output shaped like the variable http below; then you should be able to get what you need.
$ cat main.tf
variable "http" {
  type    = "list"
  default = [{componentID = "k8QEbeuHdDnU", name = "Jenkins"}]
}

output "http" {
  value = "${lookup(var.http[0], "componentID")}"
}

$ terraform apply
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

http = k8QEbeuHdDnU
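If you are on Terraform 0.12 or later (a newer option than the answer above, so treat this as a sketch rather than something tested against your endpoint), you can drop the external data source and the Ruby script entirely: jsondecode() parses the http body directly, and a for expression pulls out the component IDs:
locals {
  components    = jsondecode(data.http.example.body)
  component_ids = [for c in local.components : c.componentID]
}

output "component_ids" {
  value = local.component_ids
}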

How to stop ImageMagick in Ruby (Rmagick) evaluating an @ sign in text annotation

In an app I recently built for a client, the following code resulted in the variable @nameText being evaluated, which then resulted in the error 'no text' (since the variable doesn't exist).
To get around this I used gsub, as per the example below. Is there a way to tell Magick not to evaluate the string at all?
require 'RMagick'

@image = Magick::Image.read( '/path/to/image.jpg' ).first
@nameText = '@SomeTwitterUser'

@text = Magick::Draw.new
@text.font_family = 'Futura'
@text.pointsize = 22
@text.font_weight = Magick::BoldWeight

# Causes error 'no text'...
# @text.annotate( @image, 0,0,200,54, @nameText )

@text.annotate( @image, 0,0,200,54, @nameText.gsub('@', '\@') )
This is the C code from RMagick that is returning the error:
// Translate & store in Draw structure
draw->info->text = InterpretImageProperties(NULL, image, StringValuePtr(text));
if (!draw->info->text)
{
    rb_raise(rb_eArgError, "no text");
}
It is the call to InterpretImageProperties that is modifying the input text - but it is not Ruby, or a Ruby instance variable that it is trying to reference. The function is defined here in the Image Magick core library: http://www.imagemagick.org/api/MagickCore/property_8c_source.html#l02966
Look a bit further down, and you can see the code:
/* handle a '@' replace string from file */
if (*p == '@') {
    p++;
    if (*p != '-' && (IsPathAccessible(p) == MagickFalse) ) {
        (void) ThrowMagickException(&image->exception,GetMagickModule(),
            OptionError,"UnableToAccessPath","%s",p);
        return((char *) NULL);
    }
    return(FileToString(p,~0,&image->exception));
}
In summary, this is a core library feature which will attempt to load the text from a file (named SomeTwitterUser in your case; I have confirmed this, try it!), and your workaround is probably the best you can do.
For efficiency, and minimal changes to input strings, you could rely on the selectivity of the library code and only modify the string if it starts with @:
@text.annotate( @image, 0,0,200,54, @name_string.gsub( /^@/, '\@') )
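If the same escaping is needed in more than one place, it could be wrapped in a small helper; literal_caption is a hypothetical name, and this is just the same sub/gsub workaround packaged up:
# Escape a leading '@' so annotate treats the text literally instead of
# interpreting it as a "read text from this file" directive.
def literal_caption(text)
  text.sub(/\A@/, '\@')
end

@text.annotate( @image, 0, 0, 200, 54, literal_caption(@name_string) )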

Forcing a package's function to use user-provided function

I'm running into a problem with the MNP package which I've traced to an unfortunate call to deparse (whose maximum width is limited to 500 characters).
Background (easily skippable if you're bored)
Because mnp uses a somewhat idiosyncratic syntax to allow for varying choice sets (you include cbind(choiceA,choiceB,...) in the formula definition), the left hand side of my formula call is 1700 characters or so when model.matrix.default calls deparse on it. Since deparse supports a maximum width.cutoff of 500 characters, the sapply(attr(t, "variables"), deparse, width.cutoff = 500)[-1L] line in model.matrix.default has as its first element:
[1] "cbind(plan1, plan2, plan3, plan4, plan5, plan6, plan7, plan8, plan9, plan10, plan11, plan12, plan13, plan14, plan15, plan16, plan17, plan18, plan19, plan20, plan21, plan22, plan23, plan24, plan25, plan26, plan27, plan28, plan29, plan30, plan31, plan32, plan33, plan34, plan35, plan36, plan37, plan38, plan39, plan40, plan41, plan42, plan43, plan44, plan45, plan46, plan47, plan48, plan49, plan50, plan51, plan52, plan53, plan54, plan55, plan56, plan57, plan58, plan59, plan60, plan61, plan62, plan63, "
[2] " plan64, plan65, plan66, plan67, plan68, plan69, plan70, plan71, plan72, plan73, plan74, plan75, plan76, plan77, plan78, plan79, plan80, plan81, plan82, plan83, plan84, plan85, plan86, plan87, plan88, plan89, plan90, plan91, plan92, plan93, plan94, plan95, plan96, plan97, plan98, plan99, plan100, plan101, plan102, plan103, plan104, plan105, plan106, plan107, plan108, plan109, plan110, plan111, plan112, plan113, plan114, plan115, plan116, plan117, plan118, plan119, plan120, plan121, plan122, plan123, "
[3] " plan124, plan125, plan126, plan127, plan128, plan129, plan130, plan131, plan132, plan133, plan134, plan135, plan136, plan137, plan138, plan139, plan140, plan141, plan142, plan143, plan144, plan145, plan146, plan147, plan148, plan149, plan150, plan151, plan152, plan153, plan154, plan155, plan156, plan157, plan158, plan159, plan160, plan161, plan162, plan163, plan164, plan165, plan166, plan167, plan168, plan169, plan170, plan171, plan172, plan173, plan174, plan175, plan176, plan177, plan178, plan179, "
[4] " plan180, plan181, plan182, plan183, plan184, plan185, plan186, plan187, plan188, plan189, plan190, plan191, plan192, plan193, plan194, plan195, plan196, plan197, plan198, plan199, plan200, plan201, plan202, plan203, plan204, plan205, plan206, plan207, plan208, plan209, plan210, plan211, plan212, plan213, plan214, plan215, plan216, plan217, plan218, plan219, plan220, plan221, plan222, plan223, plan224, plan225, plan226, plan227, plan228, plan229, plan230, plan231, plan232, plan233, plan234, plan235, "
[5] " plan236, plan237, plan238, plan239, plan240, plan241, plan242, plan243, plan244, plan245, plan246, plan247, plan248, plan249, plan250, plan251, plan252, plan253, plan254, plan255, plan256, plan257, plan258, plan259, plan260, plan261, plan262, plan263, plan264, plan265, plan266, plan267, plan268, plan269, plan270, plan271, plan272, plan273, plan274, plan275, plan276, plan277, plan278, plan279, plan280, plan281, plan282, plan283, plan284, plan285, plan286, plan287, plan288, plan289, plan290, plan291, "
[6] " plan292, plan293, plan294, plan295, plan296, plan297, plan298, plan299, plan300, plan301, plan302, plan303, plan304, plan305, plan306, plan307, plan308, plan309, plan310, plan311, plan312, plan313)"
When model.matrix.default tests this against the variables in the data.frame, it returns an error.
The problem
To get around this, I've written a new deparse function:
deparse <- function(expr, width.cutoff = 60L,
                    backtick = mode(expr) %in% c("call", "expression", "(", "function"),
                    control = c("keepInteger", "showAttributes", "keepNA"),
                    nlines = -1L) {
  ret <- .Internal(deparse(expr, width.cutoff, backtick, .deparseOpts(control), nlines))
  paste0(ret, collapse = "")
}
However, when I run mnp again and step through, it returns the same error for the same reason (base::deparse is being run, not my deparse).
This is somewhat surprising to me, as what I expect is more typified by this example, where the user-defined function temporarily overwrites the base function:
> print <- function() {
+ cat("user-defined print ran\n")
+ }
> print()
user-defined print ran
I realize the right way to solve this problem is to rewrite model.matrix.default, but as a tool for debugging I'm curious how to force it to use my deparse and why the anticipated (by me) behavior is not happening here.
The functions fixInNamespace and assignInNamespace are provided to allow editing of existing functions. You could try ... but I will not since mucking with deparse looks too dangerous:
assignInNamespace("deparse",
function (expr, width.cutoff = 60L, backtick = mode(expr) %in%
c("call", "expression", "(", "function"), control = c("keepInteger",
"showAttributes", "keepNA"), nlines = -1L) {
ret <- .Internal(deparse(expr, width.cutoff, backtick, .deparseOpts(control), nlines))
paste0(ret,collapse="")
} , "base")
There is an indication on the help page that the use of such functions has restrictions, and I would not be surprised if such a core function had additional layers of protection. Since it works via side effect, you should not need to assign the result.
This is how packages with namespaces search for functions, as described in Section 1.6, Package Namespaces, of Writing R Extensions:
Namespaces are sealed once they are loaded. Sealing means that imports and exports cannot be changed and that internal variable bindings cannot be changed. Sealing allows a simpler implementation strategy for the namespace mechanism. Sealing also allows code analysis and compilation tools to accurately identify the definition corresponding to a global variable reference in a function body.
The namespace controls the search strategy for variables used by functions in the package. If not found locally, R searches the package namespace first, then the imports, then the base namespace and then the normal search path.
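A tiny demonstration of that search order (my own illustration, not from Writing R Extensions; it relies on the fact that stats::median calls mean() internally for even-length input):
# Shadowing mean() in the global environment has no effect on package code,
# because stats resolves mean in its own namespace and then base, never reaching .GlobalEnv.
> mean <- function(x, ...) stop("user-defined mean ran")
> mean(1:2)               # Error: user-defined mean ran  (a top-level call sees the new function)
> median(c(1, 2, 3, 4))   # 2.5  (stats::median still uses base::mean internally)
> rm(mean)
This is exactly why the user-defined deparse is never seen by model.matrix.default: the stats namespace finds base::deparse before it ever looks at the global environment.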
