I have a define like the one below:
define module::define1 (
  $path      = undef,
  $someparam = undef,
  ...
) {
  unless defined(Define2[$someparam]) {
    fail("Please create define2[$someparam] first")
  }

  file { $path:
    ensure => directory,
  }
}
When I try to test for the existence of File[$path], I get the error "Please create define2[$someparam] first". But define2 is declared in another file.
How can I include define2 inside my define1's rspec scope?
I believe that you are hitting a severe limitation of your manifest design. Expressing cross-dependencies through the defined function is very fragile and bound to lead to issues.
Have you considered replacing your checks with constraints? (The module documentation also elaborates a bit on why defined is often a bad idea.)
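If you just need the rspec-puppet catalogue to compile in the meantime, rspec-puppet's pre_condition helper lets the spec declare the prerequisite resource itself. A rough sketch, assuming define2 actually lives at module::define2 and using made-up parameter values:
require 'spec_helper'

describe 'module::define1' do
  let(:title)  { 'example' }
  let(:params) { { 'path' => '/tmp/example', 'someparam' => 'foo' } }
  # Declare the prerequisite define2 so the defined() check can find it.
  let(:pre_condition) { 'module::define2 { "foo": }' }

  it { is_expected.to contain_file('/tmp/example').with_ensure('directory') }
end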
In RSpec, let uses lazy instantiation, so let(:foo) { create(...) } isn't evaluated until something calls foo. Usually this is good, because the object is only created when needed, which keeps test runs much quicker.
Occasionally, however, you will have a spec that needs that variable but doesn't explicitly call it. With lazy instantiation, that spec will fail.
One solution is the bang version: let!(:foo) { create(...) } forces the variable to be initialised.
Some developers seem to be very against this and prefer:
let(:foo) { create(...) }
before do
foo
end
to force the initialisation.
Is there a reason for this? Is there any difference between the two methods?
I can think of one difference: before blocks compound across nested contexts, whereas you can override let! with let and vice versa. I'll give you an example:
context do
  let!(:foo) { create(:foo) }
  let(:bar)  { create(:bar) }
  before { bar }

  context do
    # if in this context you wish to switch back to "lazy" versions you can
    # do that for :foo, just do:
    let(:foo) { create(:foo) }

    # but you can't "undo" before, even if you define an empty one:
    before { }
    # it does not cancel the `before` blocks defined in higher contexts
  end
end
Edit: I just realized this does not really answer the question of why someone would prefer before over let!. Maybe: as mentioned in the comments, the execution order is different, but if your specs depend on such a nuance, they are already too complicated.
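For reference, rspec-core implements let! as little more than let plus a before hook, roughly like this (paraphrased, so treat it as an approximation rather than the exact source):
def let!(name, &block)
  let(name, &block)
  before { __send__(name) }
end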
In many situations it is a matter of style: the developer is not fully aware of RSpec's main features, and often the choice simply has no deeper reason. Humans are not machines, and especially under time pressure, developers do things they wouldn't do in ideal conditions :).
But the two cases presented are not strictly the same.
For example, if you are using subject, it is evaluated in a before hook before the let! initialization and not inside it. I didn't test this, but I believe these cases should show the difference:
let!(:car)   { create(:car) }
let(:driver) { create(:driver) }
subject { driver.car }
it { expect(subject).to eq car } # fails
This forces the car to be created up front and makes it available to subject:
let(:car)    { create(:car) }
let(:driver) { create(:driver) }
subject { driver.car }
before { car }
it { expect(subject).to eq car } # succeeds
I'm rewriting a framework from Perl 5 to Perl 6 for work purposes. In one place I need to collect information from other modules/classes by executing a public sub they might provide, or might not, so it is necessary to find out whether the sub is present. This is not a big deal when a module is referenced directly (Foo::<&my-sub>) or by a symbolic name in a string (&::("Foo")::my-sub). But for simplicity I would like to allow module names to be passed as-is (let's say collector is the method collecting the info):
self.collector( Foo );
Where Foo could be the following:
module Foo {
    use Bar;
    use Baz;
    our sub my-sub { Bar, 'Baz' }
}
And this is where I'm missing something important about Perl 6 syntax, because the following:
method collector ( $mod ) {
    my $mod-name = $mod.WHO;
    my @mods;
    with &::($mod-name)::my-sub {
        @mods.push: &$_();
    }
}
is currently the only way I can perform the task.
I didn't try a type capture yet, though. It should work as expected, I guess. So the question is more about extending my knowledge of the syntax.
Here's the final solution from the exchange with Vadim in the comments on their question. It's arguably insane. They think it's beautiful. And who am I to argue? .oO( Haha, hoho, heehee... )
my $pkg-arg = (Int, 'Int').pick;
my \pkg-sym = $pkg-arg && ::($pkg-arg);
my \sub-ref = &pkg-sym::($subname);
There are two obviously useful ways to refer to a package:
Its symbolic name. Int is the symbolic name of the Int class.
Its string name. 'Int' is the string name of the Int class.
Vadim, reasonably enough, wants a solution for both.
In the solution in this answer I simulate the two types of argument by randomly picking one and assigning it to $pkg-arg:
my $pkg-arg = (Int, 'Int').pick;
Now we need to normalize. If we've got a symbolic name we're good to go. But if it's a string name, we need to turn that into the symbolic name.
Vadim showed a couple ways to do this in the comments on their question. This solution uses a third option:
my \pkg-sym = $pkg-arg && ::($pkg-arg);
If $pkg-arg is a symbolic name, it's a type object and therefore evaluates as false. With a false LHS the && short-circuits and returns its LHS. If $pkg-arg is a string name, the && instead returns its RHS, which is ::($pkg-arg), a symbol lookup using $pkg-arg as a string name.
The upshot is that pkg-sym ends up containing a package symbolic name (or a Failure if the lookup failed to find a matching symbolic name).
Which leaves the last line. That looks for a sub named $subname in the package pkg-sym:
my \sub-ref = &pkg-sym::($subname);
The & is needed to ensure the RHS is treated as a reference rather than as an attempt to call a routine. And pkg-sym has to be a sigilless identifier otherwise the code won't work.
At the end of these three lines of code sub-ref contains either a Failure or a reference to the wanted sub.
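To tie it together, a rough sketch of a collector built on those three lines. The sub name, the fallback value, and collect-from itself are my own assumptions; Foo is the module from the question:
sub collect-from ($mod, Str $subname = 'my-sub') {
    # Normalize: a string name gets looked up, a type object is used as-is.
    my \pkg-sym = $mod ~~ Str ?? ::($mod) !! $mod;
    my \sub-ref = &pkg-sym::($subname);
    # `with` only runs its block for a defined value, so a Failure is skipped.
    with sub-ref { return .() }
    return Empty;
}

say collect-from(Foo);    # works with the symbolic name
say collect-from('Foo');  # and with the string name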
While I was able to define methods, defining variables using a Jenkins shared library seems to be an unresolved quest so far.
I added vars/True.groovy with this body:
def call() {
    return true
}
And now, inside the Jenkinsfile, I tried to test whether it works as expected:
println "evaluation ${ true == True }"
But surprise: this fails because it considers true != class True. My impression is that true == True() might work, but that's not the point. I need a real variable, because the whole purpose is to avoid errors caused by people using the wrong case.
So what is the magic trick?
I've found a way to achieve this, but with a slight caveat: the variables must be defined/wrapped within a class. However, this does have the upside of providing better organization, in order not to pollute the global space too much.
For example, we often reuse four standard "magic strings" for build statuses, which I wanted to save as global constants to facilitate interoperability. So I created a global status class, defined in vars/status.groovy:
class status {
    final String STARTED = "STARTED"
    final String SUCCESS = "SUCCESS"
    final String FAILURE = "FAILURE"
    final String ABORTED = "ABORTED"
}
The constants can then be used by referring to their parent class:
echo status.STARTED
echo status.SUCCESS
echo status.FAILURE
echo status.ABORTED
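For illustration, this is roughly how the constants can then be consumed from a declarative Jenkinsfile (the pipeline itself is just a made-up example):
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo "Status is now ${status.STARTED}"
            }
        }
    }
    post {
        success { echo status.SUCCESS }
        failure { echo status.FAILURE }
        aborted { echo status.ABORTED }
    }
}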
Special thanks to @mkobit for pointing me in the right direction!
It looks like global variables defined in the vars directory must be lower/camel/maybe some other special casing. This isn't stated anywhere in the Defining global variables section, but there is this note at the top:
The vars directory hosts scripts that define global variables accessible from Pipeline. The basename of each *.groovy file should be a Groovy (~ Java) identifier, conventionally camelCased. The matching *.txt, if present, can contain documentation, processed through the system’s configured markup formatter (so may really be HTML, Markdown, etc., though the txt extension is required).
Here is what I tried:
vars/MyTrue.groovy
class MyTrue implements Serializable {
}
vars/myTrue.groovy
class myTrue implements Serializable {
}
vars/mytrue.groovy
class mytrue implements Serializable {
}
vars/doesCasingMatter.groovy
class DoesCasingMatter implements Serializable {
}
And in my pipeline script to test if they are instances or Class types (no script security enabled here):
echo("MyTrue: ${Class.isInstance(MyTrue)}")
echo("myTrue: ${Class.isInstance(myTrue)}")
echo("mytrue: ${Class.isInstance(mytrue)}")
echo("What bindings are there?: ${binding.variables}")
This prints out:
[Pipeline] echo
MyTrue: true
[Pipeline] echo
myTrue: false
[Pipeline] echo
mytrue: false
[Pipeline] echo
What bindings are there?: [steps:org.jenkinsci.plugins.workflow.cps.DSL@e96256, myTrue:myTrue@8a1ddc5, mytrue:mytrue@392ff649]
Which seems to indicate that something about the class name determines how it gets compiled and created. The first example, which is similar to your vars/True.groovy, is only imported as a class and not instantiated. The other two are compiled and instantiated and are bound to the script with their defined class names. I think you will have to define your classes differently if you want them to be global variables.
The variable True is definitely a variable, but it holds a reference to an object of type True (the one you defined in /vars/True.groovy). You have two options.
The good one:
Use it this way
println "evaluation ${ true == True() }"
The strange one:
You can override the equals() method in /vars/True.groovy:
public boolean equals(obj) {
    return obj == true;
}
Then this should work
println "evaluation ${ true == True }"
But it would be really strange and can lead to misunderstandings.
I'm new to Dart and just learning the basics.
The Dart homepage shows the following:
It turns out that Dart does indeed have a way to ask if an optional
parameter was provided when the method was called. Just use the
question mark parameter syntax.
Here is an example:
void alignDingleArm(num axis, [num rotations]) {
  if (?rotations) {
    // the parameter was really used
  }
}
So I wrote a simple testing script for learning:
import 'dart:html';

void main() {
  String showLine(String string, {String printBefore : "Line: ", String printAfter}) {
    // check if the parameter was set manually:
    if (?printBefore) {
      // check if the parameter was set to null
      if (printBefore == null) {
        printBefore = "";
      }
    }
    String line = printBefore + string + printAfter;
    output.appendText(line);
    output.appendHtml("<br />\n");
    return line;
  }

  showLine("Hallo Welt!", printBefore: null);
}
The Dart Editor already marks the question mark as an error:
Multiple markers at this line
- Unexpected token '?'
- Conditions must have a static type of 'bool'
When running the script in Dartium, the JS console shows the following error:
Internal error: 'http://localhost:8081/main.dart': error: line 7 pos 8: unexpected token '?'
if(?printBefore){
^
I know that it would be enough to check whether printBefore is null, but I want to learn the language.
Does anyone know the reason for this problem?
How can I check whether the parameter was set manually?
The feature existed at some point in Dart's development, but it was removed again because it caused more complication than it removed, without solving the problem that actually needed solving - forwarding of default parameters.
If you have a function foo([x = 42]) and you want a function to forward to it, bar([x]) => foo(x);, then, since foo could actually tell whether x was passed or not, you ended up writing bar([x]) => ?x ? foo(x) : foo();. That was worse than what you had to do without the ?x operator.
Ideas came up about having a bar([x]) => foo(?:x) or something which passed on x if it was present and not if it was absent (I no longer remember the actual proposed syntax), but that got complicated fast, e.g. when converting named arguments to positional - bar({x,y}) => foo(?:x, ?:y); - what if y was provided and x was not? It was really just a bad solution for a self-inflicted problem.
So the ?x feature was rolled back. All optional parameters have a default value which is used if there is no matching argument in a call. If you want to forward an optional parameter, you need to know the default value of the function you are forwarding to.
For most function arguments, the declared default value is null, with an internal if (arg == null) arg = defaultValue; statement to fix it. That means that the null value can be forwarded directly without any confusion.
Some arguments have a non-null default value. It's mostly boolean arguments, but there are other cases too. I recommend using null for everything except named boolean parameters (because they are really meant to be named more than they are meant to be optional). At least unless there is a good reason not to, like ensuring that all subclasses will have the same default value for a method parameter (which may be a good reason, or not, and should be used judiciously).
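As a tiny illustration of the forwarding problem and the null convention (the function names are made up):
foo([x = 42]) => x;

// Forwarding blindly, bar([x]) => foo(x);, would pass null and lose foo's default.
// With the null convention you branch (or repeat the default) yourself:
bar([x]) => x == null ? foo() : foo(x);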
If you have an optional parameter that can also accept null as a value ... consider whether it should really be optional, or if you just need a different function with one more argument. Or maybe you can introduce a different "missing argument" default value. Example:
abstract class C { foo([D something]); }

class _DMarker implements D { const _DMarker(); }

class _ActualC {
  foo([D something = const _DMarker()]) {
    if (something == const _DMarker()) {
      // No argument passed, because user cannot create a _DMarker.
    } else {
      // Argument passed, may be null.
    }
  }
}
This is a big workaround, and hardly ever worth it. In general, just use null as the default value; it's simpler.
I was trying something similar:
This does not work
widget.optionalStringParameter ? widget.optionalStringParameter : 'default string'
This works
widget.optionalStringParameter != null ? widget.optionalStringParameter : 'default string'
This also works
widget.optionalStringParameter ?? 'default string'
There was support for checking whether an optional parameter was actually provided in the early days of Dart (pre 1.0), but it was removed because it caused some troubles.
Take a field inside a deeply nested table, for example text.title.1.font. Even if you use
if text.title.1.font then ... end
it results in an error like "attempt to index global 'text' (a nil value)" if any level of the table does not actually exist. Of course one could check for the existence of each level of the table, but that seems rather cumbersome. I am wondering whether there is a safe and prettier way to handle this, such that referencing such an object yields nil instead of triggering an error.
The way to do this that doesn't invite lots of bugs is to explicitly tell Lua which fields of which tables should be tables by default. You can do this with metatables. The following is an example, but it should really be customized according to how you want your tables to be structured.
-- This metatable is intended to catch bugs by keeping default tables empty.
local default_mt = {
  __newindex =
    function()
      error(
        'This is a default table. You have to make nested tables the old-fashioned way.')
    end
}
local number_mt = {
  __index =
    function(self, key)
      if type(key) == 'number' then
        return setmetatable({}, default_mt)
      end
    end
}
local default_number_mt = {
  __index = number_mt.__index,
  __newindex = default_mt.__newindex
}
local title_mt = {__index = {title = setmetatable({}, default_number_mt)}}
local text = setmetatable({}, title_mt)
print(text.title[1].font)
Egor's suggestion debug.setmetatable(nil, {__index = function()end}) is the easiest to apply. Keep in mind that it's not lexically scoped, so, once it's on, it will be "on" until turned off, which may have unintended consequences in other parts of your code. See this thread for the discussion and some alternatives.
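A quick demonstration of that approach (remember it changes how every nil in the program behaves, so you may want to switch it back off afterwards):
debug.setmetatable(nil, { __index = function() end })
print(text.title[1].font)    -- prints nil instead of raising an error
debug.setmetatable(nil, nil) -- restore the default behaviour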
Also note that text.title.1.font should probably be text.title[1].font or text.title['1'].font (and these two are not the same).
Another, a bit more verbose, but still usable alternative is:
if (((text or {}).title or {})[1] or {}).font then ... end
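If you need this in more than one place, a small helper keeps it readable; this is just a sketch, not something from the answers above:
-- Walk a chain of keys, returning nil as soon as any level is missing.
local function dig(t, ...)
  for _, key in ipairs({...}) do
    if type(t) ~= 'table' then return nil end
    t = t[key]
  end
  return t
end

local font = dig(text, 'title', 1, 'font')
if font then print(font) end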