Accessing a Variable at compile time - compilation

The will trait gives compile-time access to the Variable on which it's called. Are there other ways to access the Variables that will be installed in a given lexical scope? (I know I can access the Scalars at runtime, but I'm trying to get at the Variables).
In particular, I would like to be able to do something like the following (which doesn't work):
multi trait_mod:<is>(Sub \fn, :$foo) {
    for fn.lexical_variables { #`(do stuff) }
}
Is there any way to do so?

Not at present; however, it should be possible in a future Raku language version. Work is taking place to define a standard AST for the Raku language (currently known as "RakuAST") and to rewrite the compiler frontend to work in terms of it. Once that is done, the AST will be exposed in a number of places. Macros are the most obvious consumers, but it's also planned:
To make the AST of a block or routine available from traits, in order that traits can inspect and perhaps even modify the AST
To introduce custom compiler passes, which will be modules that are given access to the entire AST of the scope that they are imported into
The first of these would seem to satisfy your use case. Going by the current proposed API, it could look something like this:
multi trait_mod:<is>(Sub $fn, :$foo!) {
    for $fn.ast.ast-lexical-declarations {
        say "Name: " ~ .lexical-name;
        when RakuAST::VarDeclaration::Simple { #`( my $x / state $x / ... ) }
        when RakuAST::VarDeclaration::Term { #`( my \x = ... ) }
        # Others, depending if you care about parameters, placeholder params, implicits, etc.
    }
}

Related

Wrap GoRoutine Creation to also contain pprof label

In my code there are various functions with many user-defined parameters and return types. Most of them run in parallel as goroutines. To be able to debug them efficiently, I want to add labels.
I went through this link on using profile labels to find goroutines during debugging.
However, if I do it the way they describe, i.e. call the desired function inside pprof.Do, I would have to add the same redundant piece of code everywhere.
Old code:
go executeMeAsGoRoutine(arg1, arg2)
New Code based on the links above:
go func(args1 type1, args2 type2) {
    labels := pprof.Labels("some label", "some more label", "args1", strconv.Itoa(int(100)))
    pprof.Do(context.Background(), labels, func(_ context.Context) {
        // function that was to be called:
        executeMeAsGoRoutine(args1, args2)
    })
}(page, i)
However, most of my functions have different arguments and custom return types. If I were to enhance each goroutine like this, I would have to repeat the above code everywhere. Wherever I am calling go runThisFunctionInGoRoutine(), I'd also have to add the outer anonymous goroutine, followed by pprof.Labels, followed by pprof.Do. This leads to a lot of unnecessary code repetition; there are already many places where different goroutines are created.
I was hoping I could write a generic function that would WRAP the goroutine creation as well as the label creation, and then call pprof.Do for me.
Can someone suggest how to do that? The source of the problem is that the functions that should run as goroutines are different and have different arguments.
I envision some API like:
makeGoRoutineWithLabel(labels string, functionToRun, argumentsToSupply).
NOTE: functionToRun is different each time, and argumentsToSupply differs as well.
Can anyone suggest how to do this without repeating the code? A generic goroutine creation wrapper which also provides a way to attach labels.
Instead of go createGoRoutine, it's something like createAGoRoutineWithLabels.
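One possible shape for such a wrapper is sketched below; it is only an illustration, not a standard API. The wrapper accepts a plain closure, so it works regardless of the target function's signature. Only pprof.Labels and pprof.Do come from the runtime/pprof package; the name runWithLabels and the label values are made up:

package main

import (
    "context"
    "runtime/pprof"
)

// runWithLabels starts fn in its own goroutine with the given pprof labels attached.
// labels must be alternating key/value pairs, as pprof.Labels expects.
func runWithLabels(fn func(), labels ...string) {
    go pprof.Do(context.Background(), pprof.Labels(labels...), func(_ context.Context) {
        fn()
    })
}

// Usage: the closure captures whatever arguments the target function needs, e.g.
//   runWithLabels(func() { executeMeAsGoRoutine(arg1, arg2) }, "job", "pageFetcher")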

Ruby: how to use multiple boolean options and keep the code readable

I need to process a couple of boolean options, and I am trying to do it like it is usually done in C:
DICT = 0x000020000
FILTER = 0x000040000
HIGH = 0x000080000
KEEP = 0x000100000
NEXT = 0x000200000
I can now assign arbitrary options to an Integer variable, and test for them:
action if (opts & (HIGH|KEEP)) != 0
But this looks ugly and gets hard to read. I would prefer writing it like
action if opts.have HIGH|KEEP
This would require adding a have method to the Integer class.
The question now is: where would I do that, in order to keep this method contained to the module where those options are used and the classes that include this module? I don't think it's a good idea to add it globally, as somebody might define another have somewhere.
Or, are there better approaches, for the general task or for the given use-case? Adding a separate Options class looks like overkill - or should I?
You can use anybits?:
action if opts.anybits?(HIGH|KEEP)
The method returns true if any bits from the given mask are set in the receiver, and false otherwise.
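For readability, here is a minimal sketch (assuming Ruby 2.5+, where Integer#anybits? was introduced) with the flags kept inside a module so nothing leaks into the global scope; the module name MyOptions is illustrative:

module MyOptions
  DICT   = 0x000020000
  FILTER = 0x000040000
  HIGH   = 0x000080000
  KEEP   = 0x000100000
  NEXT   = 0x000200000
end

opts = MyOptions::HIGH | MyOptions::DICT

# True when any bit of the mask is set in opts:
puts "act" if opts.anybits?(MyOptions::HIGH | MyOptions::KEEP)   # prints "act"
puts "filtered" if opts.anybits?(MyOptions::FILTER)              # prints nothing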

How to print out tracing message in Substrate runtime development

When working on Parity Substrate runtime development, how can I print out debug messages for tracing and inspecting my variables?
Both of the other answers are correct in their own sense/time. Here's a more up-to-date overview:
runtime_io::print("..."); has been moved. You can now use the same function from sp_runtime::print(). These prints are visible in a log target named runtime, so you'd have to run with RUST_LOG=runtime=debug. You are still calling into sp_io under the hood, though. Also note that frame_support re-exports this for you; most pallets need frame_support anyhow, and this makes the usage easier.
If you want to compile for both wasm and native, and want prints only for native execution, use the sp_std::if_std! {} macro.
Finally, you can use the frame_support::debug module. This module provides wrappers around the above two to make the usage easier and more Rust-like. Similar to a normal logger, you can use debug::native::warn!(...) etc.
A final useful tip: when possible, you can just sprinkle your code with println! and run SKIP_WASM_BUILD=1 cargo run [xxx]. This is helpful when you are developing and want quick debug prints without any of the setup explained above.
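Putting those options together, here is a rough sketch of what this can look like inside runtime code. It is only illustrative: the function name my_runtime_logic is made up, and the exact module paths (sp_runtime, frame_support::debug, sp_std) vary between Substrate versions.

// Rough sketch; paths follow the description above and may differ per Substrate version.
use frame_support::debug;

pub fn my_runtime_logic(value: u32) {
    // 1. Goes to the `runtime` log target; run the node with RUST_LOG=runtime=debug.
    sp_runtime::print("my_runtime_logic was called");

    // 2. Rust-like logging wrappers; the `native` variants only print during native execution.
    debug::native::info!("current value: {:?}", value);

    // 3. Plain println!, compiled only when the `std` feature is enabled (native builds).
    sp_std::if_std! {
        println!("native-only output: {}", value);
    }
}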
You can also use the if_std! macro included with sp-std:
https://github.com/paritytech/substrate/pull/2979
if_std! is a feature gate: the code inside it is only compiled and run when the std feature is enabled.
Example
sp_std::if_std! {
    // This code is only being compiled and executed when the `std` feature is enabled.
    println!("Hello native world");
}
This is better because you can println! variables and other values rather than simply printing a fixed string.
As a newcomer to Substrate development, I found the most direct way is with runtime_io::print().
Example:
use runtime_io::{ self };

decl_module! {
    pub struct Module<T: Trait> for enum Call where origin: T::Origin {
        fn deposit_event<T>() = default;

        pub fn my_func(origin) -> Result {
            runtime_io::print("Hello World");
            Ok(())
        }
    }
}
The message will then appear in the console. Keep an eye out for it, as the output scrolls by quickly.
For a complete example, refer to the TCR tutorial example in github.
You can use the log crate: add it to your Cargo.toml and use it like this:
log::info!("hello {}", "substrate");
Source: https://docs.substrate.io/test/debug/

Puppet: Making a custom function depend on a resource

I have a Puppet custom function that returns information about a user defined in OpenStack's Keystone identity service. Usage is something along the lines of:
$tenant_id = lookup_tenant_by_name($username, $password, "mytenant")
The problem is that the credentials used in this query ($username) are supposed to be created by another resource during the Puppet run (a Keystone_user resource from puppet-keystone). As far as I can tell, the call to the lookup_tenant_by_name function is being evaluated before any resource ordering happens, because no amount of dependencies in the calling code is able to force the credentials to be created prior to this function being executed.
In general, is it possible to write custom functions -- or place them appropriately in a manifest -- such that they will not be executed by Puppet until after some specified resource has been applied?
Short answer: You cannot make your manifest's behavior depend on resources declared inside of it.
Long answer: Parser functions are called during the compilation phase (on the master if you use one, or on the agent if you use puppet apply). In neither case can a function ever run after a resource is synced, because syncing only happens once the compiler has finished all of its work (including invoking your functions).
To query information from the agent machine, you generally want to use custom facts. Still, those are populated even before the compiler runs.
Likely the best approach in this situation is to make the manifest tolerate the absence of the information, so that anything that depends on the value that your lookup_tenant_by_name function returns will only be evaluated if that value is available. This will usually be during the second Puppet run.
if $tenant_id == "" {
    notify { "cannot yet find tenant ${username}": }
}
else {
    # your code using the tenant ID
}

Debugging a custom function in Google Apps Script

I am trying to create my first custom function for a Google Spreadsheet in Apps Script and I am having a hard time using the debugger.
I am working on the custom function demo code from the Google documentation, and I have set a breakpoint in the custom function drivingDistance(origin, destination) that is used in a cell of my spreadsheet. The problem I have is that the debugger shows the parameters that are passed into the function as undefined. The content of any other variables created during execution is displayed correctly, though (as long as they do not depend on the input parameters).
Funny thing is that although the input parameters are displayed as undefined, the function's calculations succeed, so this seems to be a debugger issue. Unfortunately, this problem prevents me from successfully learning to create and debug my own code (as I will have to work with complex input parameters).
I have a feeling that the problem is connected to the server-side execution of Apps Script, so I tried to log the input parameters using the Logger class and I also tried to copy these variables into new local variables. But all I came up with was undefined.
Another strange hint: typeof on the parameters returns String, but getting their length throws an error, and concatenating them with another string returns the string "undefined".
I am looking for insights about what is going on here.
The debugger is probably not lying to you - if you launch that function in the debugger, it will have no parameters passed to it. No worries, though, you just need to make sure that you get values to use for debugging. Take a look at How can I test a trigger function in GAS?, which demonstrates techniques that can be applied for custom functions.
Instead of defining an event to pass to the function, you'll want to provide (or retrieve from your spreadsheet) values for the parameters.
function test_drivingDistance() {
  // Define a set of test values
  var testSet = [[ 'Washington, DC', 'Seattle, WA' ],
                 [ 'Ottawa, ON', 'Orlando, FL'],
                 [ 'Paris, France', 'Dakar, Senegal']];
  // Run multiple tests
  for (var test in testSet) {
    Logger.log('Test ' + test + ' = ' + drivingDistance(testSet[test][0], testSet[test][1]));
  }
  // Get parameters from the sheet instead
  var ss = SpreadsheetApp.getActiveSheet();
  var TestFromSheet = drivingDistance(ss.getRange('A1').getValue(), ss.getRange('A2').getValue());
}
You get the idea. You can still set breakpoints inside your function, or use debugger to pause execution.
Edit - examining arguments
What arguments is the custom function receiving when called from a spreadsheet?
You're limited in what you can do to debug this, since the debugger can't be used to examine your custom function when invoked from Sheets, and security limitations on custom functions block Logging. It might be enough to get an understanding of argument passing in general. While javascript functions may have named parameters, all arguments are passed as an Array-like object, called arguments. This custom function will return an array that reports the arguments received. When called from a spreadsheet, each argument will appear in its own cell, starting at the cell you enter the function into:
function testArguments( ) {
  var argArray = [];
  for (var arg in arguments) {
    argArray.push("arguments[" + arg + "] = " + JSON.stringify(arguments[arg]));
  }
  return argArray;
}
In javascript, there aren't really types like int or float - just Number. Those parameters will show up without quotes on them, and look like numbers. Dates arrive as Date objects, but when printed this way show up as Date-y strings. Strings have quotes.
A custom function never receives a range as an argument; when you provide a range parameter in the spreadsheet, its contents are collected into a one or two-dimensional array, and the array is the argument.
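As a small illustration of that point (a hypothetical function, not part of the original demo), a custom function called as =SUMGRID(A1:B2) receives the range contents as a two-dimensional array:

function SUMGRID(values) {
  // values is a 2-D array of cell values when the argument is a multi-cell range.
  var total = 0;
  for (var r = 0; r < values.length; r++) {
    for (var c = 0; c < values[r].length; c++) {
      total += Number(values[r][c]) || 0;
    }
  }
  return total;
}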
You can use this hack to see the structure of the arguments being sent into the custom function:
function TEST(input) {
  return (JSON.stringify(input));
}
The results will show up in your sheet as the JSON representation of whatever was passed to the function.
