How to print out tracing messages in Substrate runtime development

When working on Parity Substrate runtime development, how can I print out debug messages for tracing and inspecting my variables?

Both of the other answers are correct in their own sense/time. Here's a more accurate overview:
runtime_io::print("..."); has been moved. You can now use the same function as sp_runtime::print() (from the sp-runtime crate). These prints are visible in a log target named runtime, so you'd have to run with RUST_LOG=runtime=debug. You are still calling into sp_io under the hood, though. Also note that frame_support re-exports this for you; most pallets need frame_support anyhow, and this makes the usage easier.
If you want to compile for both Wasm and native, and want prints only for native execution, use the sp_std::if_std!{} macro.
Finally, you can use the frame_support::debug module. This module provides wrappers around the above two to make the usage easier and more Rust-like. Similar to a normal logger, you can use debug::native::warn!(...) etc.
A final useful tip: when possible, you can just sprinkle your code with println! and run SKIP_WASM_BUILD=1 cargo run [xxx]. This is helpful when you are developing and want quick debug prints without any of the setup explained above.
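To illustrate the first three options together, here is a minimal sketch of a dispatchable in the decl_module!-era FRAME API used elsewhere in this thread. The pallet scaffolding, my_func and some_value are made up for illustration, and exact paths and required attributes depend on your Substrate version:

use frame_support::{decl_module, debug, dispatch};

decl_module! {
    pub struct Module<T: Trait> for enum Call where origin: T::Origin {
        #[weight = 10_000]
        pub fn my_func(origin) -> dispatch::DispatchResult {
            // 1. Runtime-side printing; visible under the `runtime` log target,
            //    so start the node with RUST_LOG=runtime=debug.
            sp_runtime::print("my_func was called");

            // 2. Native-only printing: the block below is only compiled and
            //    executed in the native (std) build, so println! is available.
            let some_value: u32 = 42;
            sp_std::if_std! {
                println!("native only: some_value = {}", some_value);
            }

            // 3. Logger-style wrappers from frame_support::debug.
            debug::native::warn!("something looks odd: {}", some_value);

            Ok(())
        }
    }
}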

You can also use the if_std! macro included with sp-std:
https://github.com/paritytech/substrate/pull/2979
if_std! is a feature gate: the code inside it is only compiled and run when the std feature is enabled.
Example
sp_std::if_std! {
    // This code is only being compiled and executed when the `std` feature is enabled.
    println!("Hello native world");
}
This is better because you can println! variables and other values rather than simply printing a fixed string.
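For example, to dump a local variable (a trivial sketch; balance is just a hypothetical local):

let balance: u64 = 1_000;
sp_std::if_std! {
    println!("balance is {}", balance);
}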

As a newcomer to Substrate development, the most direct way I found is with runtime_io::print().
Example:
use runtime_io::{self};

decl_module! {
    pub struct Module<T: Trait> for enum Call where origin: T::Origin {
        fn deposit_event<T>() = default;

        pub fn my_func(origin) -> Result {
            runtime_io::print("Hello World");
            Ok(())
        }
    }
}
The message will then appear in the console. Keep an eye out for it, as the output scrolls by quickly.
For a complete example, refer to the TCR tutorial example on GitHub.

You can also use the log crate: add it to your Cargo.toml and use it like this:
log::info!("hello {}", "substrate");
Source: https://docs.substrate.io/test/debug/
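For reference, the dependency entry could look something like this (the version is only an example; in a no_std runtime crate you would typically disable the default features):

[dependencies]
log = { version = "0.4", default-features = false }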

Related

Wrap GoRoutine Creation to also contain pprof label

In my code there are various types of functions with a lot of user-defined parameters and return types. Most of them run in parallel as goroutines. In order to be able to efficiently debug them, I want to add labels.
I went through this link on using profiler labels to find goroutines during debugging.
However, if I do it the way they've mentioned, i.e. call the desired function inside pprof.Do, then this would mean that I would have to add this redundant piece of code everywhere.
Old code:
go executeMeAsGoRoutine(arg1, arg2)
New Code based on the links above:
go func(args1 type1, args2 type2, args3 type3, args4 type4) {
    labels := pprof.Labels("some label", "some more label", "args1", strconv.Itoa(int(100)))
    pprof.Do(context.Background(), labels, func(_ context.Context) {
        // function that was to be called.
        executeMeAsGoRoutine(arg1, arg2, arg3, arg4) // Use args1 or args2
    })
}(page, i)
However, most of my functions have different arguments and custom return types. If I were to enhance each goroutine, I would have to repeat the above code everywhere. Wherever I am calling go runThisFunctionInGoRoutine(), I'll also have to add an outer anonymous goroutine, followed by pprof.Labels, followed by pprof.Do. This will lead to a lot of unnecessary code repetition. There are already so many places where different goroutines are created.
I was hoping I could make a generic function that would WRAP the goroutine creation call as well as the label creation, and then does pprof.Do for me.
Can someone suggest how to do that? The source of the problem is that the functions that should run as goroutines are different and have different arguments.
I envision some API like:
makeGoRoutineWithLabel(labels string, functionToRun, argumentsToSupply).
NOTE: functionToRun is different each time, with argumentsToSupply being different as well.
Can anyone suggest how to do this without repeating the code? A generic goroutine creation wrapper which also provides a provision to add labels.
Instead of go createGoRoutine, it's something like createAGoRoutineWithLabels.
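One way such a wrapper could look (a sketch, not a definitive answer; goWithLabels and the example job are hypothetical names) is to let a closure capture whatever arguments the target function needs, so the wrapper itself only ever deals with a single func(context.Context):

package main

import (
    "context"
    "fmt"
    "runtime/pprof"
    "sync"
)

// goWithLabels starts fn on a new goroutine with the given pprof labels
// attached to that goroutine's context.
func goWithLabels(labels pprof.LabelSet, fn func(ctx context.Context)) {
    go pprof.Do(context.Background(), labels, fn)
}

func executeMeAsGoRoutine(page string, i int) {
    fmt.Println("working on", page, i)
}

func main() {
    var wg sync.WaitGroup
    page, i := "index", 42

    wg.Add(1)
    // The closure captures the arguments, so call sites stay short
    // regardless of the wrapped function's signature.
    goWithLabels(pprof.Labels("job", "indexer", "page", page), func(_ context.Context) {
        defer wg.Done()
        executeMeAsGoRoutine(page, i)
    })
    wg.Wait()
}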

Using Windows COM Automation with Rust

We're using a Brother label printer (QL-series) via its programming interface/SDK called 'bPac'. The tool, of which the label printing facility is part, is currently being rewritten from Ruby to Rust. During this process, I got stuck on the Win32/COM/OLE thing in Rust. A minimal working example in Ruby would be simple enough:
doc = WIN32OLE.new "bpac.Document"
doc.open 'some_label.lbx'
doc.SetPrinter "Brother QL-810W", true
print_opts = 0
doc.StartPrint("", print_opts)
doc.PrintOut(1, print_opts)
doc.EndPrint
I'd like to have a similarly simple working example in Rust to start off with. As I'm not familiar with the Windows API, the windows-rs crate is quite overwhelming. I figured that I probably need the System::Com part of it. Here's what I started off with:
use windows::Win32::System::{Com, Ole};
use ::windows::core::Result;
pub fn print() {
    unsafe { Com::CoInitializeEx(std::ptr::null(), Com::COINIT_APARTMENTTHREADED) }.unwrap();
    let clsid = unsafe { Com::CLSIDFromProgID("bpac.Document") };
    println!("We've got a CLSID: {:?}", clsid);
    let obj: Result<Com::IDispatch> = unsafe { Com::CoCreateInstance(&clsid.unwrap(), None, Com::CLSCTX_ALL) };
    println!("IDispatch: {:?}", obj);
}
This way I can acquire an IDispatch object, which I should be able to query for available methods and properties. However, I'm having trouble calling into this low-level (very close to the C metal) API. I found the win-idispatch crate, but that does not seem to play ball with windows-rs... :-/
I wanted to do something similar: COM Office/Excel Automation using Rust.
In a nutshell, I built a wrapper around IDispatch::GetIDsOfNames() and IDispatch::Invoke(), and for the arguments one would use VARIANT.
The following resources helped me build a solution:
https://stuncloud.wordpress.com/2021/08/17/rust_com/
https://qiita.com/benki/items/42099c58e07b16293609
https://learn.microsoft.com/en-us/previous-versions/office/troubleshoot/office-developer/automate-excel-from-c

Accessing a Variable at compile time

The will trait gives compile-time access to the Variable on which it's called. Are there other ways to access the Variables that will be installed in a given lexical scope? (I know I can access the Scalars at runtime, but I'm trying to get at the Variables).
In particular, I would like to be able to do something like the following (which doesn't work):
multi trait_mod:<is>(Sub \fn, :$foo) {
for fn.lexical_variables { #`(do stuff) }
}
Is there any way to do so?
Not at present; however, it should be possible in a future Raku language version. Work is taking place to define a standard AST for the Raku language (currently known as "RakuAST") and to rewrite the compiler frontend to work in terms of it. Once that is done, it will be exposed in a number of places. Macros are the most obvious consumers, but it is also planned:
To make the AST of a block or routine available from traits, in order that traits can inspect and perhaps even modify the AST
To introduce custom compiler passes, which will be modules that are given access to the entire AST of the scope that they are imported into
The first of these would seem to satisfy your use case. Going on the current proposed API, it could look something like this:
multi trait_mod:<is>(Sub $fn, :$foo!) {
    for $fn.ast.ast-lexical-declarations {
        say "Name: " ~ .lexical-name;
        when RakuAST::VarDeclaration::Simple { #`( my $x / state $x / ... ) }
        when RakuAST::VarDeclaration::Term   { #`( my \x = ... ) }
        # Others, depending if you care about parameters, placeholder params, implicits, etc.
    }
}

Elm find unused functions

Let's say, for example, we have an app like this:
port module MyApp exposing (main)
import Html.App as App
main =
    App.programWithFlags
        { init = init
        , view = view
        , update = update
        , subscriptions = subscriptions
        }
Could we safely assume that the only useful functions are the ones that are ports and anything called from init, view, update or subscriptions?
After some refactoring I stopped calling some functions. Is there a compiler flag or linter that could notify me that a function can be safely removed?
You can only detect unused module imports, by running elm-make with the --warn flag.
That's all you could get from the compiler today.
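For example (assuming Elm 0.18-era tooling and that the entry file for the module above is MyApp.elm):

elm-make MyApp.elm --warn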
Just remove stuff and the compiler will tell you if you have to put it back, I guess.

Is there a way to prevent Visual Studio from breaking on exceptions in a specific method?

I know I can control the way Visual Studio handles exceptions according to their type and to whether they're eventually caught, using the "Exceptions" dialog.
However, I've got a library that internally throws (and catches) an ArgumentOutOfRange exception when I call a specific method. The exception is thrown (and caught by the library) maybe 1% of the time, but I'm calling this method a lot. The library's vendor says it's by design (and indeed, the design they've chosen makes sense).
The thing is that I don't want Visual Studio to break each time the exception is thrown.
I don't want to stop breaking on ArgumentOutOfRange exceptions altogether, as I may have some in my own code and want to break on those.
I don't want to enable "just my code" debugging, because I'm concerned about the exceptions thrown outside of my code (notably for performance reasons).
Is there a way to achieve this? I've been looking into attributes (such as DebuggerStepThrough), but haven't found anything adequate yet.
Any hints on how to do this?
I don't want to enable "just my code" debugging
Yeah, stop there right now. That is exactly the feature you need to not get the unwanted debugger breaks. If you don't want to know about somebody else's crappy code then flip that checkbox back on.
This invariably goes off the rails when programmers use exceptions for flow control. A very common crime. It takes two of them to turn that into a mess that turns a debugging session into a very tedious click nightmare. When you need the debugger feature that breaks on the first-chance exception then you basically lost if somebody else needed that as well.
Everybody hopes that they can magically use the [DebuggerNonUserCode] or [DebuggerHidden] or [DebuggerStepThrough] attributes to make that problem disappear. It doesn't. The other programmer did not think his code was unimportant enough to deserve those attributes. And, well, it wasn't because there's always a bug hidden in code that uses try/catch-em-all code. Pokémon code.
So Microsoft had to find another way to help programmers deal with crappy library code. They did. Tick that checkbox, bam, solved. Nothing you can do about that crappy code anyway, other than sending a nasty-gram to the author. Don't let us or Microsoft slow you down doing that as well, y'all have to get along to create a product that people like to use.
I think it's not possible in Visual Studio, but it certainly is in WinDbg.
See for example http://blogs.msdn.com/b/alejacma/archive/2009/08/24/managed-debugging-with-windbg-breaking-on-an-exception-part-1.aspx
On a side note, it seems that starting with Visual Studio 2010 you can load and use WinDbg extension DLLs directly, providing additional functionality (possibly including the one you need), but I haven't tried this yet - see for example http://www.dotnetcurry.com/ShowArticle.aspx?ID=648
What you can do is use Concord, the debug engine that ships with Visual Studio (starting with version 2012). It's quite extensible through a nice managed API (and deployable using VSIX technology), but it's not fully documented.
Concord has the concept of debug monitors, which we can hook using the IDkmDebugMonitorExceptionNotification interface.
The cool thing is that this interface can monitor all exceptions thrown. It can also "suppress" any detected exception event, which is exactly what we need.
What I suggest is to start with the Hello World sample: . Download it, and make sure it runs as expected for you.
Now, just modify HelloWorld.vsdconfigxml like this:
<!--TODO: If you copy the sample, ensure to regenerate the GUID in this file -->
<!-- 1. change component level to something higher than 40500 -->
<ManagedComponent
  ComponentId="51736b11-9fb4-4b6d-8aca-a10a2b7ae768"
  ComponentLevel="40501"
  AssemblyName="HelloWorld">

  <!-- 2. change class full name to HelloWorld.ExceptionHandler, for example -->
  <Class Name="HelloWorld.ExceptionHandler">
    <Implements>
      <InterfaceGroup>
        <NoFilter/>
        <!-- 3. change supported interface -->
        <Interface Name="IDkmDebugMonitorExceptionNotification"/>
      </InterfaceGroup>
    </Implements>
  </Class>
</ManagedComponent>
Then, just create an ExceptionHandler.cs class and put something like this in there:
public class ExceptionHandler : IDkmDebugMonitorExceptionNotification
{
    private bool _unhandledDetected;

    // we're being called!
    public void OnDebugMonitorException(DkmExceptionInformation exception, DkmWorkList workList, DkmEventDescriptorS eventDescriptor)
    {
        if (_unhandledDetected)
        {
            // this will cause the program to terminate
            eventDescriptor.Suppress();
            return;
        }

        if (exception.ProcessingStage.HasFlag(DkmExceptionProcessingStage.Unhandled))
        {
            _unhandledDetected = true;
        }
        else if (exception.ProcessingStage.HasFlag(DkmExceptionProcessingStage.Thrown))
        {
            if (SuppressException(exception))
            {
                eventDescriptor.Suppress();
            }
        }
    }

    // should we suppress a thrown (1st chance) exception?
    private bool SuppressException(DkmExceptionInformation exception)
    {
        // implement any custom logic in here, for example use the exception's name
        if (exception.Name == typeof(ArgumentOutOfRangeException).FullName)
        {
            // for example, use the module (assembly) name
            var clrAddress = (DkmClrInstructionAddress)exception.InstructionAddress;
            var clrModule = clrAddress.ModuleInstance;
            if (clrModule.Name == "TheUglyOne.dll")
                return true; // we don't want this one!
        }
        return false;
    }
}
When you run the project, you should see all exceptions being monitored (regardless of your 'just my code' and/or exception trigger settings), so all you need to do is implement some logic to suppress the ones you really don't want to see. I've not checked, but I suppose you could build your logic using custom attributes, as the Dkm classes provide quite a lot of .NET metadata information.
Note: as you can see, there is some trickery to make sure the program will terminate normally.
