Can I tell pylint about a specific param a decorator requires and have it not apply unused-argument? - pylint

I use pyinvoke which has a task decorator that works like this:
@task
def mycommand(
    # MUST include the context param even if it's not used
    ctx: Context,
):
    # Do stuff, but don't use ctx
    ...
Even if I don't use ctx, I must include it for pyinvoke to work correctly. Pylint then reports Unused argument 'ctx' Pylint(W0613:unused-argument).
From what I have read in GitHub issues it seems like it would be unreasonable to expect pylint to dig into decorators and figure them all out automatically.
I also don't want to turn off this pylint rule for the entire function.
Is there a way I can tell pylint that, when the @task decorator is used, it should not apply the W0613 rule to the first argument of the function?

When code is too dynamic for pylint to parse, it's possible to create a "brain", i.e. a simplified version of the code that explains to astroid (pylint's internal code representation) what the real code does. Generally this is what a pylint plugin does (for example, pylint-django does it for view functions that need request, which is similar to your issue with ctx). There is an example of a brain for signals directly in astroid, as well as documentation for writing one. It's also possible that a pylint plugin already exists for this, so you may not have to do it yourself.

Related

What to put in rspec's expect for a globally available object?

I am writing a custom matcher for my logging output, mostly so that I can customize the error output to be more readable and helpful.
The thing being examined for the test is the array returned by LoggingSpecHelper.log_events, a module class method (i.e. not a module instance method). Therefore, it is available without the need for it to be passed as a parameter.
LoggingSpecHelper.log_events is kind of long to specify in each expectation, and in any case I'd prefer to hide that implementation detail from the caller in case the implementation changes. That leads me to use expect(:logging), where :logging is a dummy value that has no meaning and is not examined. This, however, is awkward and confusing, leaving the reader scratching their head and thinking "he's examining a symbol?"
Here is an example of how it is currently called; in this case I'm looking to see if a fatal error occurs in the log that contains 'something is misconfigured' (not a real production message):
expect(:logging).to have_log_output_match(:fatal, 'something is misconfigured')
The long form of this, and what is happening in the matcher, is below, but if I do this instead of calling the matcher, I don't get to see what is in the log if an error occurs (the matcher includes log content in the failure message):
expect(LoggingSpecHelper.log_events.count do |event|
  event.level == :fatal && /something is misconfigured/.match?(event.message)
end).to be == 1
How would you suggest I handle this? I don't think shared examples are what I want here because this log expectation might not be the only expectation in the example.
As an aside, I should probably rename the matcher to make it clear that I am testing for exactly 1 occurrence, not >= 1.
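For reference, the matcher being called above might be defined roughly like this (a sketch of the setup described in the question, not the actual implementation; the dummy argument passed to expect is simply ignored):

# Sketch only: mirrors the long form above, with a failure message that includes the log.
RSpec::Matchers.define :have_log_output_match do |level, pattern|
  match do |_ignored|
    matches = LoggingSpecHelper.log_events.count do |event|
      event.level == level && Regexp.new(pattern).match?(event.message)
    end
    matches == 1
  end

  failure_message do |_ignored|
    "expected exactly one #{level} log entry matching #{pattern.inspect}, got:\n" +
      LoggingSpecHelper.log_events.map(&:message).join("\n")
  end
end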

How can I mock a Ruby "require" statement in RSpec?

I have a Ruby cli program that can optionally load a user-specified file via require. I would like to unit test this functionality via RSpec. The obvious thing to do is to mock the require and verify that it happened. Something like this:
context 'with the --require option' do
  let(:file) { "test_require.rb" }
  let(:args) { ["--require", file] }

  it "loads the specified file" do
    expect(...something...).to receive(:require).with(file).and_return(true)
    command.start(args)
  end
end
(That's just typed, not copy/pasted - the actual code would obscure the question.)
No matter what I try, I can't capture the require, even though it's occurring (it raises a LoadError, so I can see that). I've tried a variety of things, including the most obvious:
expect(Kernel).to receive(:require).with(file).and_return(true)
or even:
let(:kernel_class) { class_double('Kernel') }
kernel_class.as_stubbed_const
allow(Kernel).to receive(:require).and_call_original
allow(Kernel).to receive(:require).with(file).and_return(true)
but nothing seems to hook onto the require.
Suggestions?
require is defined by Kernel, but Kernel is included in Object, so when you call require in this context it is not necessarily the Kernel module that processes the call.
Update
I am not sure if this exactly solves your issue but it does not suffer from the strange behavior exhibited below:
file = 'non-existent-file'
allow(self).to receive(:require).with(file).and_return(true)
expect(self).to receive(:require).with(file)
expect(require file).to eq(true)
Working Example
OLD Answer:
This is incorrect and exists only for posterity because of the up-votes it received. Somehow it works without the allow; I would love it if someone could explain why, as I assumed it should raise instead. I believe the issue is related to and_return, which is not part of the expectation. My guess is that we are only testing that self received require with file, and that the and_return portion is just a message transmission (thus my updated answer).
You can still stub this like so:
file = 'non-existent-file.rb'
allow_any_instance_of(Kernel).to receive(:require).with(file).and_return(true)
expect(self).to receive(:require).with(file).and_return(true)
require file
Since you have obfuscated your exact implementation for the question, I cannot solve your exact issue.
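For illustration, here is a minimal sketch of that idea applied to the spec from the question. It assumes command is the CLI object under test (defined elsewhere, e.g. via a let) and that its start method calls require without an explicit receiver, so the instance itself processes the call:

context 'with the --require option' do
  let(:file) { 'test_require.rb' }
  let(:args) { ['--require', file] }

  it 'loads the specified file' do
    # Stub require on the object that actually calls it, not on Kernel.
    allow(command).to receive(:require).with(file).and_return(true)
    command.start(args)
    expect(command).to have_received(:require).with(file)
  end
end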

How do I programmatically set a content_security_policy?

I'm configuring the Content Security Policy for our Rails 5.2 app. I need to whitelist some domains in our CSP. I'd like to put the list of domains elsewhere so I can reference them in other places in the application, then generate the CSP headers programmatically from that list.
Looking at the source code for the Content Security Policy configuration mechanisms in Rails 5, it looks like there's some magic metaprogramming going on, so it's not clear to me how to accomplish what I need to do. It looks like the functions I need to call to set headers might be picky about how exactly they want to be called. In particular, it's not clear to me if I can pass them arrays or safely call them multiple times, or if they do some metaprogramming magic that only works if the domains are passed in as individual function arguments.
Can I pass in an array to the header I want to set, like this?
whitelisted_domains = ['https://example.com', 'self']

Rails.application.configure do
  config.content_security_policy do |csp|
    csp.child_src whitelisted_domains
  end
end
Or can I call the same function multiple times, like this?
whitelisted_domains = ['https://example.com', 'self']

Rails.application.configure do
  config.content_security_policy do |csp|
    whitelisted_domains.each { |domain| csp.child_src domain }
  end
end
If neither of those will work, what's the best way of accomplishing what I want to do?
From what I can tell from the source code and documentation, you can pass it an array by splatting it. The Rails edge guides show the following:
Rails.application.config.content_security_policy do |policy|
  policy.default_src :self, :https
  ...
end
and the source code defines each directive method with a *sources parameter, so I believe it takes any number of arguments, meaning you could do something along the lines of:
whitelisted_domains = ['https://example.com', 'self']

Rails.application.configure do
  config.content_security_policy do |csp|
    csp.child_src(*whitelisted_domains)
  end
end
https://blog.sqreen.io/integrating-content-security-policy-into-your-rails-applications-4f883eed8f45/
https://edgeguides.rubyonrails.org/security.html#content-security-policy
Source code of the define_method call for each directive:
https://github.com/rails/rails/blob/master/actionpack/lib/action_dispatch/http/content_security_policy.rb#L151
(Note: none of this has been tested in a Rails app; I've simply looked at the guides and the Rails source code.)
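If the splat approach works, one way to keep the list in a single place and reuse it elsewhere might look like this (a sketch, equally untested; the WHITELISTED_DOMAINS constant name and the extra frame_src call are just for illustration):

# config/initializers/content_security_policy.rb
# Keywords such as 'self' are written as symbols in the Rails CSP DSL;
# plain hosts stay as strings.
WHITELISTED_DOMAINS = ['https://example.com', :self].freeze

Rails.application.configure do
  config.content_security_policy do |csp|
    csp.child_src(*WHITELISTED_DOMAINS)
    csp.frame_src(*WHITELISTED_DOMAINS) # the same list can be reused for other directives
  end
end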

Conditional routes and bot name in Lita

I am trying to develop a simple Lita chat bot with more flexible command routing.
There are a couple of issues I am having difficulties with.
1. Conditional routing
How can I use config values before or inside route definitions?
For example, instead of this definition that needs a "run" prefix:
route(/^\s*run\s+(\S*)\s*(.*)$/, :cmd, command: true)
I would like to use something like this, with a flexible, config-based prefix:
route(/^\s*#{config.prefix}\s+(\S*)\s*(.*)$/, :cmd, command: true)
Which fails. So I also tried something like this:
if config.use_prefix
  route(/^\s*run\s+(\S*)\s*(.*)$/, :cmd, command: true)
else
  route(/^\s*(\S*)\s*(.*)$/, :cmd, command: true)
end
Which also fails with a not very helpful error.
In both cases, I defined the proper config key with config :prefix and config :use_prefix.
2. Showing the bot name in the help
I know there is a robot.name property available for me inside the executed command, but I was unable to use it inside of the help string. I was trying to achieve something like this:
route(/^\s*run\s+(\S*)\s*(.*)$/, :cmd, command: true, help: {
  "run SCRIPT" => "run the specified SCRIPT. use `#{robot.name} run list` for a list of available scripts."
})
but it just printed something unexpected.
Any help is appreciated.
The issue is that you're confusing the config class method and the config instance method. config at the class level (code in the class body but not inside an instance method definition) defines a new configuration attribute for the plugin. config at the instance level (inside an instance method or in an inline callback provided to route using a block) accesses the values of the plugin's own configuration at runtime.
In the current version of Lita, there isn't a pretty way to use runtime configuration in class-level definitions like chat routes. The workaround I've used myself is to register an event listener for the :loaded event, which triggers when the Lita::Robot has been initialized. At this point, configuration has been finalized, and you can use it to define more routes.
For example:
class MyHandler < Lita::Handler
  on :loaded, :define_dynamic_routes

  def define_dynamic_routes(payload)
    if config.some_setting
      self.class.route(/foo/, :callback)
    else
      self.class.route(/bar/, :callback)
    end
  end
end
You can look at the code for lita-karma for a more detailed example, as it uses this pattern.
The next major version of Lita is going to include an overhaul to the plugin system which will make this pattern much easier. For now, this is what I'd recommend, though.
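Applied to the prefix example from the question, that pattern might look roughly like this (a sketch, untested; it assumes the prefix and use_prefix attributes defined with config :prefix and config :use_prefix, with illustrative defaults):

class MyHandler < Lita::Handler
  config :prefix, default: "run"
  config :use_prefix, default: true

  on :loaded, :define_dynamic_routes

  def define_dynamic_routes(payload)
    # config here is the instance-level accessor, so the finalized values are available.
    if config.use_prefix
      self.class.route(/^\s*#{config.prefix}\s+(\S*)\s*(.*)$/, :cmd, command: true)
    else
      self.class.route(/^\s*(\S*)\s*(.*)$/, :cmd, command: true)
    end
  end

  def cmd(response)
    # handle the matched command here
  end
end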

Is there any way to delay a resource's attribute resolution until the "execute" phase?

I have two LWRPs. The first deals with creating a disk volume, formatting it, and mounting it on a virtual machine; we'll call this resource cloud_volume. The second resource (what it does isn't really important) needs a UUID for the newly formatted volume as a required attribute; we'll call this resource foobar.
The resources cloud_volume and foobar are used in a recipe something like the following.
volumes.each do |mount_point, volume|
  cloud_volume "#{mount_point}" do
    size volume['size']
    label volume['label']
    action [:create, :initialize]
  end

  foobar "#{mount_point}" do
    disk_uuid node[:volumes][mount_point][:uuid] # This is set by cloud_volume
    action [:do_stuff]
  end
end
So, when I do a chef run I get a Required argument disk_identifier is missing! exception.
After doing some digging I discovered that recipes are processed in two phases, a compile phase and an execute phase. It looks like the issue occurs at compile time, as that is when node[:volumes][mount_point][:uuid] is not yet set.
Unfortunately I can't use the trick that OpsCode has here as notifications are being used in the cloud_volume LWRP (so it would fall into the anti-pattern shown in the documentation)
So, after all this, my question is, is there any way to get around the requirement that the value of disk_uuid be known at compile time?
A cleaner way would be to use lazy attribute evaluation. This evaluates node[:volumes][mount_point][:uuid] at execution time instead of compile time:
foobar "#{mount_point}" do
  disk_uuid lazy { node[:volumes][mount_point][:uuid] }
  action [:do_stuff]
end
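Putting that together with the loop from the question, the recipe would look something like this (a sketch):

volumes.each do |mount_point, volume|
  cloud_volume "#{mount_point}" do
    size volume['size']
    label volume['label']
    action [:create, :initialize]
  end

  foobar "#{mount_point}" do
    # Evaluated at converge (execute) time, after cloud_volume has set the attribute.
    disk_uuid lazy { node[:volumes][mount_point][:uuid] }
    action [:do_stuff]
  end
end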
Disclaimer: this is the way to go with older Chef (<11.6.0), before they added lazy attribute evaluation.
Wrap your foobar resource in a ruby_block and define foobar dynamically. This way, after the compile stage you will have Ruby code in the resource collection, and it will be evaluated during the run stage.
ruby_block "mount #{mount_point} using foobar" do
  block do
    res = Chef::Resource::Foobar.new(mount_point, run_context)
    res.disk_uuid node[:volumes][mount_point][:uuid]
    res.run_action :do_stuff
  end
end
This way node[:volumes][mount_point][:uuid] will not be known at compile time, but it also will not be accessed at compile time. It will only be accessed during the run stage, when it should already be set.
