I'm using UVM command line arguments to set configuration properties in the UVM hierarchy.
When I pass in a bad config option, I would like to see a UVM_ERROR or another failure indication. What's the easiest way to accomplish this?
For example, if I pass in a bad option like:
+uvm_set_config_int=bad,mode,5
The sim completes, and I do not see any indication from the log that the option was bad:
# UVM_INFO @ 0: reporter [UVM_CMDLINE_PROC] Applying config setting from the command line: +uvm_set_config_int=bad,mode,5
Full code can be run here: http://www.edaplayground.com/s/4/673
I'm not really sure what you mean by a bad configuration option. When you do a uvm_set_config_int, your first two arguments only specify the instance and field names. There is no requirement that these two actually exist. You're basically just putting this configuration option into the config DB to be accessed later.
What you probably want is to put a check in your agent that makes sure that it is actually passed a value for its 'mode' field.
class my_agent extends uvm_agent;
//...
function void build_phase(uvm_phase phase);
if (!uvm_config_db #(int)::get(this, "", "mode", mode))
`uvm_fatal("CFGERR", "Agent was not passed a config")
endfunction
endclass
I tested this on EDAPlayground with your code, but I'm not sure if it got saved.
UVM can output additional information on configuration settings using the static uvm_component::print_config_matches bit.
In the example, set the following in your testbench:
uvm_component::print_config_matches = 1;
For a "good" configuration setting, you will see the following in the output:
# UVM_INFO @ 0: reporter [UVM_CMDLINE_PROC] Applying config setting from the command line: +uvm_set_config_int=*,mode,5
# UVM_INFO @ 0: uvm_test_top.my_agent [CFGAPL] applying configuration settings
# UVM_INFO @ 0: uvm_test_top.my_agent [CFGAPL] applying configuration to field mode
# UVM_INFO testbench(40) @ 0: uvm_test_top [my_test] Running test with mode 5
For a "bad" configuration setting, you will see the following in the output:
# UVM_INFO @ 0: reporter [UVM_CMDLINE_PROC] Applying config setting from the command line: +uvm_set_config_int=bad,mode,5
# UVM_INFO @ 0: uvm_test_top.my_agent [CFGAPL] applying configuration settings
# UVM_INFO testbench(40) @ 0: uvm_test_top [my_test] Running test with mode 0
So now you can parse the output and check that there is at least one "[CFGAPL] applying configuration to field" line following every "[UVM_CMDLINE_PROC] Applying config setting" line.
Modified code example: http://www.edaplayground.com/s/4/681
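As a rough illustration of that post-processing step, here is a small Python sketch. The log file name and the exact message substrings are assumptions based on the transcript excerpts above, and it refines the idea slightly by keying on the field name extracted from each command-line message:
import re
import sys

# For every "+uvm_set_config_*=<inst>,<field>,<value>" reported by
# [UVM_CMDLINE_PROC], require at least one [CFGAPL]
# "applying configuration to field <field>" message in the log.
CMDLINE_RE = re.compile(r"\[UVM_CMDLINE_PROC\].*\+uvm_set_config_\w+=([^,]+),([^,]+),")

def find_unmatched(log_path="sim.log"):
    text = open(log_path).read()
    unmatched = []
    for match in CMDLINE_RE.finditer(text):
        inst_name, field_name = match.group(1), match.group(2)
        if "[CFGAPL] applying configuration to field " + field_name not in text:
            unmatched.append(inst_name + "," + field_name)
    return unmatched

if __name__ == "__main__":
    for setting in find_unmatched(*sys.argv[1:]):
        print("no field match for:", setting)
Note that this only proves some component applied a field with that name, not that the instance path matched, which is one reason to prefer walking the component hierarchy as the checker below does.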
I created a uvm_component that can flag a bad +uvm_set_config_ option. It throws a UVM_ERROR if a bad option was passed in, like:
# UVM_ERROR cmd_line_checker.svh(112) @ 0: uvm_test_top.cmd_line_checker [BAD_CONFIG] UVM match for command line config bad,mode not found
Full example can be run here: http://www.edaplayground.com/s/4/766
The code:
/**
* This is a utility class to validate command line arguments in the form:
* +uvm_set_config_int=<inst_name>,<field_name>,<value>
* +uvm_set_config_string=<inst_name>,<field_name>,<value>
*/
class cmd_line_checker extends uvm_component;
/**
* The enable for this checker.
*/
bit enable = 1'b1;
`uvm_component_utils_begin(cmd_line_checker)
`uvm_field_int(enable, UVM_ALL_ON)
`uvm_component_utils_end
/**
* UVM constructor.
*/
function new(string name, uvm_component parent);
super.new(name, parent);
endfunction
/**
* UVM connect phase.
*/
function void connect_phase(uvm_phase phase);
if (enable) begin
check_command_line();
end
endfunction
/**
* Validate all command line arguments in the form:
* +uvm_set_config_int=<inst_name>,<field_name>,<value>
* +uvm_set_config_string=<inst_name>,<field_name>,<value>
*/
function void check_command_line();
string args[$];
uvm_root root = uvm_root::get();
void'(root.clp.get_arg_matches(
"/^\\+(UVM_SET_CONFIG_INT|uvm_set_config_int)=/",args));
foreach(args[i]) begin
check_config(args[i].substr(20, args[i].len()-1));
end
void'(root.clp.get_arg_matches(
"/^\\+(UVM_SET_CONFIG_STRING|uvm_set_config_string)=/",args));
foreach(args[i]) begin
check_config(args[i].substr(23, args[i].len()-1));
end
endfunction
/**
* Check a single command line argument.
* The instance name and field name should exist.
* @param cfg the command line argument in the form:
* <inst_name>,<field_name>,<value>
*/
function void check_config(string cfg);
string split_val[$];
string inst_name;
string field_name;
uvm_root root;
uvm_component components[$];
bit match_found;
uvm_split_string(cfg, ",", split_val);
inst_name = split_val[0];
field_name = split_val[1];
`uvm_info("CHECK_CONFIG",
$sformatf("checking inst_name:%s, field_name:%s",
inst_name, field_name), UVM_HIGH);
// Get every object in uvm hierarchy that matches
root = uvm_root::get();
root.find_all(inst_name, components);
// If object matches inst_name, check whether a match for field_name exists
foreach (components[i]) begin
if (match_found) begin
break;
end else begin
uvm_component component = components[i];
uvm_status_container status = component.__m_uvm_status_container;
component.__m_uvm_field_automation (null, UVM_CHECK_FIELDS, "");
if (uvm_has_wildcard(field_name)) begin
foreach (status.field_array[name]) begin
if (!(uvm_re_match(uvm_glob_to_re(field_name), name))) begin
match_found = 1;
break;
end
end
end else begin
// No wildcards to match
match_found = status.field_array[field_name];
end
status.field_array.delete();
if (match_found) begin
`uvm_info("MATCH_FOUND", $sformatf(
"UVM match for command line config %s,%s found in %s",
inst_name, field_name, component.get_full_name()), UVM_HIGH);
break;
end
end
end
if (!match_found) begin
`uvm_error("BAD_CONFIG",
$sformatf("UVM match for command line config %s,%s not found",
inst_name, field_name));
end
endfunction
endclass
SVUnit test for the above cmd_line_checker here: http://www.edaplayground.com/s/4/768
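Since all the work happens in connect_phase, integrating the checker only requires creating an instance somewhere in the testbench. A minimal usage sketch (the test class name here is made up):
class my_cmdline_test extends uvm_test;
  `uvm_component_utils(my_cmdline_test)

  cmd_line_checker checker;

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    // Create the checker anywhere in the hierarchy; its connect_phase runs
    // after build_phase, once every component that could receive the
    // configuration exists and can be found by root.find_all().
    checker = cmd_line_checker::type_id::create("cmd_line_checker", this);
  endfunction
endclass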
We have a large project that has multiple separate declarative pipeline file definitions. This is used to build different apps and installers from a single code base.
Right now, all of these files contain a large block of "code" used to generate the email body and JIRA update messages. Examples:
// Get the JIRAs to add comments to
// Return map of JIRA id to comment text from all commits for that JIRA
@NonCPS
def getJiraMap() {
a bunch of stuff
return jiraset
}
// Get the body text for the emails
def getMailBody1() {
return "See: ${BUILD_URL}\n\nChanges:\n" + getChangeString() + "\n" + testStatuses()
}
etc...
What I would like to do is have all these common methods in a separate file that all the other pipeline files can include. This seems like it SHOULD be easy, but all examples I've found appear to be rather complex involving a separate SCM - which is NOT what I want.
Updates:
Going through the various suggestions given in that link, I made the following file, BuildTools.groovy. Note that this file is in the same directory as the Jenkins pipeline file that uses it.
import hudson.tasks.test.AbstractTestResultAction
import hudson.model.Actionable
class BuildTools {
// Get the JIRAs to add comments to
// Return map of JIRA id to comment text from all commits for that JIRA
@NonCPS
def getJiraMap() {
def jiraset = [:]
.. whole bunch of stuff ..
Here are the various things I've tried, and the results.
File sourceFile = new File("./AutomatedBuild/BuildTools.groovy");
Class gcl = new GroovyClassLoader(getClass().getClassLoader()).parseClass(sourceFile);
GroovyObject bt = (GroovyObject) gcl.newInstance();
Fails with:
org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: Scripts not permitted to use method java.lang.Class getClassLoader
evaluate(new File("./AutomatedBuild/BuildTools.groovy"))
def bt = new BuildTools()
Fails with:
15:29:07 WorkflowScript: 8: unable to resolve class BuildTools
15:29:07 @ line 8, column 10.
15:29:07 def bt = new BuildTools()
15:29:07 ^
import BuildTools
def bt = new BuildTools()
Fails with:
15:35:58 WorkflowScript: 16: unable to resolve class BuildTools (note that BuildTools.groovy is in the same folder as this script)
15:35:58 @ line 16, column 1.
15:35:58 import BuildTools
15:35:58 ^
GroovyShell shell = new GroovyShell()
def bt = shell.parse(new File("./AutomatedBuild/BuildTools.groovy"))
Fails with:
org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: Scripts not permitted to use new groovy.lang.GroovyShell
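For reference, one approach that sidesteps both sandbox rejections is the built-in load step, which evaluates a workspace file inside the pipeline sandbox. The sketch below assumes BuildTools.groovy defines its methods at script level (no wrapping class) and ends with return this:
def buildTools

node {
    checkout scm
    // 'load' must run on a node, after the repository (and therefore
    // AutomatedBuild/BuildTools.groovy) has been checked out.
    buildTools = load 'AutomatedBuild/BuildTools.groovy'
    echo buildTools.getMailBody1()
}
In a declarative pipeline the same two calls go inside a script { } block of a stage.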
I cannot explain why I get a pylint "<class 'AttributeError'>: 'For' object has no attribute 'targets'" warning when I create enum entries dynamically.
I cannot see any reason for the warning in my code.
from aenum import IntEnum
class Commands(IntEnum):
_ignore_ = 'Commands index'
_init_ = 'value string'
BEL = 0x07, 'Bell'
Commands = vars()
for index in range(4):
Commands[f'DC{index + 1}'] = 0x11 + index, f'Device Control {index + 1}'
for command in Commands:
print(f"0x{command.value:02X} is {command.string}")
The code works fine but I do NOT expect a warning!
The code is OK; the bug was within the toolchain:
https://github.com/PyCQA/pylint/issues/2719
I am trying to create a custom fact I can use as the value for a class parameter in a hiera yaml file.
I am using the openstack/puppet-keystone module and I want to use fernet-keys.
According to the comments in the module I can use this parameter.
# [*fernet_keys*]
# (Optional) Hash of Keystone fernet keys
# If you enable this parameter, make sure enable_fernet_setup is set to True.
# Example of valid value:
# fernet_keys:
# /etc/keystone/fernet-keys/0:
# content: c_aJfy6At9y-toNS9SF1NQMTSkSzQ-OBYeYulTqKsWU=
# /etc/keystone/fernet-keys/1:
# content: zx0hNG7CStxFz5KXZRsf7sE4lju0dLYvXdGDIKGcd7k=
# Puppet will create a file per key in $fernet_key_repository.
# Note: defaults to false so keystone-manage fernet_setup will be executed.
# Otherwise Puppet will manage keys with File resource.
# Defaults to false
So I wrote this custom fact ...
[root@puppetmaster modules]# cat keystone_fernet/lib/facter/fernet_keys.rb
Facter.add(:fernet_keys) do
setcode do
fernet_keys = {}
puts ( 'Debug keyrepo is /etc/keystone/fernet-keys' )
Dir.glob('/etc/keystone/fernet-keys/*').each do |fernet_file|
data = File.read(fernet_file)
if data
content = {}
puts ( "Debug Key file #{fernet_file} contains #{data}" )
fernet_keys[fernet_file] = { 'content' => data }
end
end
fernet_keys
end
end
Then in my keystone.yaml file I have this line:
keystone::fernet_keys: '%{::fernet_keys}'
But when I run puppet agent -t on my node I get this error:
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Function Call, "{\"/etc/keystone/fernet-keys/1\"=>{\"content\"=>\"xxxxxxxxxxxxxxxxxxxx=\"}, \"/etc/keystone/fernet-keys/0\"=>{\"content\"=>\"xxxxxxxxxxxxxxxxxxxx=\"}}" is not a Hash. It looks to be a String at /etc/puppetlabs/code/environments/production/modules/keystone/manifests/init.pp:1144:7 on node mgmt-01
I had assumed that I had formatted the hash correctly because facter -p fernet_keys output this on the agent:
{
/etc/keystone/fernet-keys/1 => {
content => "xxxxxxxxxxxxxxxxxxxx="
},
/etc/keystone/fernet-keys/0 => {
content => "xxxxxxxxxxxxxxxxxxxx="
}
}
The code in the keystone module looks like this (with line numbers):
1142
1143 if $fernet_keys {
1144 validate_hash($fernet_keys)
1145 create_resources('file', $fernet_keys, {
1146 'owner' => $keystone_user,
1147 'group' => $keystone_group,
1148 'subscribe' => 'Anchor[keystone::install::end]',
1149 }
1150 )
1151 } else {
Puppet does not necessarily think your fact value is a string -- it might do, if the client is set to stringify facts, but that's actually beside the point. The bottom line is that Hiera interpolation tokens don't work the way you think. Specifically:
Hiera can interpolate values of any of Puppet’s data types, but the
value will be converted to a string.
(Emphasis added.)
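One way around the stringification is to stop interpolating the fact in Hiera for this parameter and pass it from a small wrapper class instead, so the value keeps its Hash type. A rough sketch (the profile class name is made up for illustration):
# Hypothetical wrapper class; declare this instead of supplying
# keystone::fernet_keys through %{...} interpolation in keystone.yaml.
class profile::keystone {
  class { 'keystone':
    fernet_keys => $facts['fernet_keys'],
  }
}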
I am using the msutter DSC module for Puppet. While reading through the source code, I came across code like this (in dsc_configuration_provider.rb):
def create
Puppet.debug "\n" + ps_script_content('set')
output = powershell(ps_script_content('set'))
Puppet.debug output
end
What file defines the powershell function or method? Is it a Ruby builtin? A Puppet builtin? Inherited from a class? I know that it is being used to send text to PowerShell as a command and gather the results, but I need to see the source code to understand how to improve its error logging for my purposes, because certain PowerShell errors are being swallowed and no warnings are being printed to the Puppet log.
These lines in file dsc_provider_helpers.rb may be relevant:
provider.commands :powershell =>
if File.exists?("#{ENV['SYSTEMROOT']}\\sysnative\\WindowsPowershell\\v1.0\\powershell.exe")
"#{ENV['SYSTEMROOT']}\\sysnative\\WindowsPowershell\\v1.0\\powershell.exe"
elsif File.exists?("#{ENV['SYSTEMROOT']}\\system32\\WindowsPowershell\\v1.0\\powershell.exe")
"#{ENV['SYSTEMROOT']}\\system32\\WindowsPowershell\\v1.0\\powershell.exe"
else
'powershell.exe'
end
Surely this defines where the PowerShell executable is located, but it gives no indication of how it is called or how its return value is derived. Are stdout and stderr combined? Am I given the text output or just the error code? etc.
This is core Puppet logic. When a provider has a command, like
commands :powershell => some binary
That is hooked up as a function powershell(*args).
You can see it with other providers like Chocolatey:
commands :chocolatey => chocolatey_command
def self.chocolatey_command
if Puppet::Util::Platform.windows?
# must determine how to get to params in ruby
#default_location = $chocolatey::params::install_location || ENV['ALLUSERSPROFILE'] + '\chocolatey'
chocopath = ENV['ChocolateyInstall'] ||
('C:\Chocolatey' if File.directory?('C:\Chocolatey')) ||
('C:\ProgramData\chocolatey' if File.directory?('C:\ProgramData\chocolatey')) ||
"#{ENV['ALLUSERSPROFILE']}\chocolatey"
chocopath += '\bin\choco.exe'
else
chocopath = 'choco.exe'
end
chocopath
end
Then other locations can just call chocolatey like a function with args:
chocolatey(*args)
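To get at the stdout/stderr question: the generated method ultimately runs the binary through Puppet's execution helper. A rough sketch of what powershell(*args) boils down to, with the method and option names recalled from Puppet's provider plumbing and therefore to be treated as approximate:
def powershell(*args)
  # :failonfail raises Puppet::ExecutionFailure on a nonzero exit code;
  # :combine merges stderr into the returned output string.
  Puppet::Util::Execution.execute(
    [command(:powershell)] + args,
    :failonfail => true,
    :combine    => true
  )
end
So the provider gets back a single string containing both streams, and failures only surface through the exit code, which is consistent with PowerShell errors that exit 0 being silently swallowed.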
In an app I recently built for a client, the following code resulted in the variable @nameText being evaluated, which then produced the error 'no text' (since the variable doesn't exist).
To get around this I used gsub, as per the example below. Is there a way to tell Magick not to evaluate the string at all?
require 'RMagick'
@image = Magick::Image.read( '/path/to/image.jpg' ).first
@nameText = '@SomeTwitterUser'
@text = Magick::Draw.new
@text.font_family = 'Futura'
@text.pointsize = 22
@text.font_weight = Magick::BoldWeight
# Causes error 'no text'...
# @text.annotate( @image, 0,0,200,54, @nameText )
@text.annotate( @image, 0,0,200,54, @nameText.gsub('@', '\@') )
This is the C code from RMagick that is returning the error:
// Translate & store in Draw structure
draw->info->text = InterpretImageProperties(NULL, image, StringValuePtr(text));
if (!draw->info->text)
{
rb_raise(rb_eArgError, "no text");
}
It is the call to InterpretImageProperties that is modifying the input text - but it is not Ruby, or a Ruby instance variable that it is trying to reference. The function is defined here in the Image Magick core library: http://www.imagemagick.org/api/MagickCore/property_8c_source.html#l02966
Look a bit further down, and you can see the code:
/* handle a '@' replace string from file */
if (*p == '@') {
p++;
if (*p != '-' && (IsPathAccessible(p) == MagickFalse) ) {
(void) ThrowMagickException(&image->exception,GetMagickModule(),
OptionError,"UnableToAccessPath","%s",p);
return((char *) NULL);
}
return(FileToString(p,~0,&image->exception));
}
In summary, this is a core library feature which will attempt to load text from a file (named SomeTwitterUser in your case; I have confirmed this, try it!), and your workaround is probably the best you can do.
For efficiency, and to minimize changes to input strings, you could rely on the selectivity of the library code and only modify the string if it starts with @:
@text.annotate( @image, 0,0,200,54, @name_string.gsub( /^@/, '\@') )