I am centralizing all the specific XPath strings in a resource file and importing the variables from that resource file into my test suite (Robot Framework). This way they can be maintained in one place, and I can use variable names that keep the robot file readable. Is that good practice?
Sometimes I want to pass an argument to the variable, to make it more dynamic. However, the value of the variable contains an XPath, which sometimes has //div[path...etc][text()='MyString'].
Question: in the robot file, how do I pass an argument ('MyString') to the Click Element keyword that uses such a variable?
It is certainly good practice to separate the technical references of your UI objects from your test logic. Typically this pattern is called an object store, though other names are used as well.
As for the method of separation, I'd recommend a YAML variable file over a resource file for static values, loading it on the command line with robot --variablefile MyVariables.yaml MyRobotFile.robot instead of importing a resource file in your test script. This has the added advantage that if you want to switch out your object store for a different software release, no change to the test script is required.
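For illustration, a minimal sketch of such a YAML object store (the locator names and XPaths below are made up for the example; YAML variable files require PyYAML to be installed):

# MyVariables.yaml
locators:
  loginButton: "//button[@id='login']"
  userNameField: "//input[@name='username']"

Loaded this way, locators becomes a dictionary variable, so the suite can refer to ${locators}[loginButton] without hard-coding the XPath anywhere in the test logic.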
In the event that your variable content changes depending on some value known when starting Robot Framework, a Python variable file (function or class) is a good approach. It takes an argument, and you can use Python to query a database, read a file, or apply internal logic to determine which variables to return and what values they should hold.
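A minimal sketch of such a variable file in function form (the file name, argument and locator values are assumptions; a class with a get_variables method works the same way):

# Locators.py -- Robot Framework calls get_variables(), passing any argument
# given after the colon on the command line:
#   robot --variablefile Locators.py:release2 MyRobotFile.robot
def get_variables(release="release1"):
    locators = {
        "release1": {"loginButton": "//button[@id='login']"},
        "release2": {"loginButton": "//button[@data-test='login']"},
    }
    return {"locators": locators[release]}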
As for adding variable pieces to an XPath without ending up with a lot of specialised keywords, I use the custom locator strategy functionality of SeleniumLibrary. This allows me to use the normal keywords without any additional keywords in the test logic itself.
In the example below a custom locator abc= is created that can be used instead of xpath= with any standard SeleniumLibrary keyword. In this case I use a dictionary as the locator object store to hold the XPaths and refer to them by a unique name. Note that the abc= prefix is stripped from the value before it reaches the ${criteria} argument.
*** Variables ***
&{locators}
...    myCustomId1=//*[@id='12234']
...    myCustomId2=//*[@id='23455']

*** Test Cases ***
Test Case
    Add Location Strategy    abc    Custom Locator Strategy
    Page Should Contain Element    abc=myCustomId1

*** Keywords ***
Custom Locator Strategy
    [Arguments]    ${browser}    ${criteria}    ${tag}    ${constraints}
    # Look up the real XPath in the dictionary using the name passed in as criteria.
    ${WebElement}=    Get WebElement    xpath=${locators['${criteria}']}
    [Return]    ${WebElement}
I need to generate Poisson arrivals of traffic and therefore need to set the start times of the applications in the clients accordingly. For this I need two things:
1. access parameters of different modules and use them as input for defining a parameter of another module
2. use a for loop to define parameters of modules
The example below demonstrates what I am trying to do.
I have 100 clients and each client has 20 applications. I want to set the start time of the first application of the first client and derive the rest using a loop.
// iat = interArrivalTime
**.cli[0].app[0].startTime = 1 // define this
**.cli[0].app[1].startTime = <**.cli[0].app[0].startTime> + exponential(<iat>)
**.cli[0].app[2].startTime = <**.cli[0].app[1].startTime> + exponential(<iat>)
...
**.cli[n].app[m].startTime = <**.cli[n].app[m-1].startTime> + exponential(<iat>)
I looked at the NED functions but could not find a solution there.
Of course I could write a script that hard-codes the start times for all clients, but that script would output a huge file which becomes very hard to manage when the number of clients and applications grows.
Thank You!
INI files are basically pattern matchers. Each time a module is initialized, the left side of the = sign on each line in the INI file is matched against the actual module path, beginning from the start of the INI file. On the first match from the beginning, the right side of the line is used as the value of the parameter.
In short, these are not assignment operations but rules telling each module how to initialize its own parameters. For example, it is undefined in what order these lines will be used during initialization; something that appears earlier in the INI file is not necessarily used earlier during module initialization. This of course prevents you from referring to another module's parameter; in fact you may not reference any other parameters at all.
In other words, INI files are declarative rather than procedural, so cross-references, loops and other procedural constructs cannot be used there.
If you want to create dependencies between module parameters, you can code that in the initialize() method of your module, by explicitly initializing a parameter from the C++ code. You can access any other module's parameter using C++ APIs.
Of course, if you don't want to modify existing applications this is not an optimal solution. However, you can create a separate module that is responsible for your 'procedural' initialization; that module can run through all of your applications and set the required parameters as needed. This approach is used in several places in INET where initialization data must be computed, one notable example being the calculation of routing table information, e.g. Ipv4FlatNetworkConfigurator.
Another approach would be to set up and configure your simulation from a scripting language such as Python; however, this is not (yet) supported by OMNeT++.
Long story short, write a configurator module and do your initialization there.
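A rough sketch of what such a configurator module could look like (the cli, app, startTime and interArrivalTime names are taken from the question; everything else is an assumption, not tested code):

#include <omnetpp.h>
#include <cstring>
using namespace omnetpp;

// Walks all cli[*] submodules of the network and sets app[i].startTime to
// app[i-1].startTime + exponential(iat), i.e. Poisson arrivals per client.
// The module's NED file must declare the interArrivalTime parameter.
class StartTimeConfigurator : public cSimpleModule
{
  protected:
    virtual void initialize() override
    {
        double iat = par("interArrivalTime").doubleValue();
        cModule *network = getParentModule();
        for (cModule::SubmoduleIterator it(network); !it.end(); ++it) {
            cModule *cli = *it;
            if (strcmp(cli->getName(), "cli") != 0)
                continue;
            double t = cli->getSubmodule("app", 0)->par("startTime").doubleValue();
            for (int i = 1; cli->getSubmodule("app", i) != nullptr; i++) {
                t += exponential(iat);
                cli->getSubmodule("app", i)->par("startTime").setDoubleValue(t);
            }
        }
    }
};

Define_Module(StartTimeConfigurator);

In practice you also need to make sure this runs in an early enough initialization stage, before the applications read their startTime parameter.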
I am experimenting with a blue/green deployment setup for Lambdas using Terraform and Lambda aliases.
I am trying to automatically retrieve the previously deployed version of the Lambda using the aws_lambda_function data source and feed its value into routing_config => additional_version_weights. This would allow me to set up a traffic split between the previously deployed version and the version that has just been deployed.
However, I have run into 2 errors I don't quite understand.
The first error occurs when I try to use the data source together with a regular variable; in this case Terraform complains that it is unable to parse the value.
If I hard-code the value, Terraform attempts the update but fails, because it tries to set the version in the routing configuration to an empty value, which causes a validation error. If I instead output the value, I can see that the correct version is retrieved.
Example code and steps to reproduce can be found at the link below.
https://github.com/jaknor/terraform-lambda-data-source-issue
Is anyone able to explain why this isn't working?
Please note, while I appreciate that there are other ways of achieving my goal, at the moment I am only interested in understanding these particular errors.
In Terraform v0.11 and prior, interpolation sequences are not supported on the left side of an = symbol introducing an argument or object key.
To generate a map with dynamic keys, you must instead use the map function:
additional_version_weights = "${map(data.aws_lambda_function.existing_lambda_func.version, var.lambda_previous_version_percentage)}"
In Terraform v0.12 (which is in beta as I write this) the parser is now able to distinguish between arguments (which must be constants in the configuration) and map keys (which can be arbitrary expressions) and so the following syntax is preferable, although the above will still work for backward compatibility.
additional_version_weights = {
  (data.aws_lambda_function.existing_lambda_func.version) = var.lambda_previous_version_percentage
}
The additional parentheses around the key expression are important to tell Terraform that this should be understood as a normal expression rather than as a literal name.
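Put together, a hedged 0.12-style sketch of how this might look (the data source and variable names come from the snippets above, but aws_lambda_function.new and var.lambda_function_name are assumptions, not taken from the linked repository):

data "aws_lambda_function" "existing_lambda_func" {
  function_name = var.lambda_function_name
}

resource "aws_lambda_alias" "live" {
  name             = "live"
  function_name    = aws_lambda_function.new.function_name
  function_version = aws_lambda_function.new.version

  routing_config {
    # Send a weighted share of traffic to the previously deployed version.
    additional_version_weights = {
      (data.aws_lambda_function.existing_lambda_func.version) = var.lambda_previous_version_percentage
    }
  }
}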
I've got a config .cfg file that has the hostname hard-coded in it. I'm trying to find a way for the hostname to be obtained locally (dynamically), by running a command similar to hostname -f, so that it fills in the variable in the .cfg without running a script (e.g. Python) to write the config file ahead of time. Is it possible to run a 'yum' command that gets the hostname to use in the YAML/yml file?
Thanks to Wikipedia, I think I found out why no one is helping me with this:
Wiki YAML -> Security
YAML is purely a data representation language and thus has no executable commands. While validation and safe parsing is inherently possible in any data language, implementation is such a notorious pitfall that YAML's lack of an associated command language may be a relative security benefit.
However, YAML allows language-specific tags so that arbitrary local objects can be created by a parser that supports those tags. Any YAML parser that allows sophisticated object instantiation to be executed opens the potential for an injection attack. Perl parsers that allow loading of objects of arbitrary class create so-called "blessed" values. Using these values may trigger unexpected behavior, e.g. if the class uses overloaded operators. This may lead to execution of arbitrary Perl code.
The situation is similar for Python or Ruby parsers. According to the PyYAML documentation
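Since YAML itself has no executable commands, the practical consequence is that something outside the YAML file has to supply the hostname, either when the file is written or when it is read. A hedged Python sketch of the read-time variant, assuming the config uses a placeholder such as {hostname}:

import socket
import yaml  # PyYAML

with open("config.yml") as f:
    raw = f.read()

# config.yml might contain, for example:  server: "{hostname}"
# Substitute the local FQDN before parsing, then load the YAML safely.
config = yaml.safe_load(raw.format(hostname=socket.getfqdn()))
print(config)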
We dynamically compute the name of a directory at run-time from some attributes:
var1 = (node['foo']['bar']).to_s
var2 = (node['foo']['baz']).to_s
app_dir = "/var/#{var1}/#{var2}"
Copying this code block to all of the recipes that need it works. When we have tried to clean this up, it bombs with "No resource, or local variable named 'app_dir'".
We have tried the following:
1) Move the block of code into attributes/default.rb
2) Move the block of code into recipes/default.rb
3) Same as 2 above, but adding require_relative 'default' in the recipes that require the variable
If you're requiring the code from a separate file (e.g. with require_relative), you need to make the variable something that gets exported. Try making it a constant instead (e.g. AppDir =), or a method; local variables are not visible after a require.
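A tiny pure-Ruby illustration of that last point (file names hypothetical):

# paths.rb
app_dir = "/var/app"   # local variable: gone after the require
APP_DIR = "/var/app"   # constant: still visible after the require

# recipe.rb
require_relative 'paths'
puts APP_DIR           # works
puts app_dir           # NameError: undefined local variable or method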
There isn't really a good way to do this in Chef. Recipe and attribute files run in very different contexts, so your best bet is either to copy-paste the boilerplate (if it really is as small as your example, do that) or perhaps a global helper method, but those are hard to write safely. Ping me on Slack and we can talk about the right kind of helper method for your exact situation (covering all of them isn't really practical for an SO answer).
If you need to share some code between a bunch of different recipes, you can try using a library loaded from a common cookbook dependency, along the lines of the sketch below.
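A hedged sketch of such a library helper (the cookbook, module and attribute names are assumptions):

# libraries/app_dir_helper.rb in the common cookbook
module MyCookbook
  module AppDirHelper
    def app_dir
      # node is available once this module is mixed into the recipe DSL
      "/var/#{node['foo']['bar']}/#{node['foo']['baz']}"
    end
  end
end

# Make app_dir callable from recipes
Chef::DSL::Recipe.send(:include, MyCookbook::AppDirHelper)

Recipes in cookbooks that depend on this one can then simply call app_dir instead of repeating the boilerplate.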
So, I'm building a custom GCC backend for a processor. This processor has 4 address spaces: local, global, mmm, and mmr. I want to make it so that when writing C code, you can do this:
int global x = 5;
which would cause the compiler to spit out an instruction like this:
ldi.g %reg, 5
I know that certain processors like Blackfin and MeP do something similar, so I figure it's possible; however, I have no idea how to do it. The technique that should allow me to do this is a variable attribute.
Any suggestions on how I could go about doing this?
You can add target-specific attributes by registering a struct attribute_spec table using TARGET_ATTRIBUTE_TABLE, as described in the GCC internals documentation. The details of struct attribute_spec can be found in the source (gcc/tree.h).
The attribute's handler function doesn't need to do anything beyond returning NULL_TREE, although typically it will at least do some error checking. (Read the comments in gcc/tree.h, and look at examples in other targets.)
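As a hedged sketch (the exact field layout of struct attribute_spec differs between GCC versions, so check gcc/tree.h for the release you are targeting; all names here are placeholders):

/* In your target file, e.g. gcc/config/myproc/myproc.c */

static tree
myproc_handle_global_attribute (tree *node, tree name,
                                tree args ATTRIBUTE_UNUSED,
                                int flags ATTRIBUTE_UNUSED,
                                bool *no_add_attrs)
{
  /* Only meaningful on variables; warn and drop it otherwise.  */
  if (TREE_CODE (*node) != VAR_DECL)
    {
      warning (OPT_Wattributes, "%qE attribute only applies to variables", name);
      *no_add_attrs = true;
    }
  return NULL_TREE;
}

/* Field order shown here follows GCC 4.x-era tree.h:
   { name, min_len, max_len, decl_required, type_required,
     function_type_required, handler, affects_type_identity }  */
static const struct attribute_spec myproc_attribute_table[] =
{
  { "global", 0, 0, true, false, false, myproc_handle_global_attribute, false },
  { NULL,     0, 0, false, false, false, NULL, false }
};

#undef TARGET_ATTRIBUTE_TABLE
#define TARGET_ATTRIBUTE_TABLE myproc_attribute_table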
Later, you can obtain the list of attributes for a declaration tree node with DECL_ATTRIBUTES() (see the internals docs again), and use lookup_attribute() (see gcc/tree.h again) to check whether a given attribute is in the list.
You want references to a symbol to generate different assembly based on your new attributes, so you probably want to use the TARGET_ENCODE_SECTION_INFO hook ("Define this hook if references to a symbol or a constant must be treated differently depending on something about the variable or function named by the symbol") to set a flag on the symbol_ref, as the docs suggest. You can then define a predicate for testing this flag in the .md file.
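Along the lines of the following sketch (again with placeholder names, modeled on how other targets use the hook):

/* Reserve one of the target-specific SYMBOL_REF flag bits.  */
#define SYMBOL_FLAG_GLOBAL_AS (SYMBOL_FLAG_MACH_DEP << 0)

static void
myproc_encode_section_info (tree decl, rtx rtl, int first)
{
  default_encode_section_info (decl, rtl, first);

  /* Mark the SYMBOL_REF when the variable carries the "global" attribute,
     so the .md patterns (e.g. the one emitting ldi.g) can test for it.  */
  if (TREE_CODE (decl) == VAR_DECL
      && MEM_P (rtl)
      && GET_CODE (XEXP (rtl, 0)) == SYMBOL_REF
      && lookup_attribute ("global", DECL_ATTRIBUTES (decl)))
    SYMBOL_REF_FLAGS (XEXP (rtl, 0)) |= SYMBOL_FLAG_GLOBAL_AS;
}

#undef TARGET_ENCODE_SECTION_INFO
#define TARGET_ENCODE_SECTION_INFO myproc_encode_section_info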