Julia: Having a function f() containing the macro @printf, how can I access the output outside f()?

In the Julia NMF package, a verbose option provides information on convergence using the @printf macro.
How can I access this output without rewriting the NMF package's I/O code?
To rephrase: given a function f() containing the @printf macro, how can I access the output outside f()?

This does seem like useful functionality to have: I would suggest that you file an issue with the package.
However, as a quick hack, something like the following should work:
oldout = stdout                  # (STDOUT in pre-1.0 Julia)
(rd, wr) = redirect_stdout()     # (pre-1.0 Julia also needed start_reading(rd) here)
# call your function here
Libc.flush_cstdio()              # flush output buffered at the C level
redirect_stdout(oldout)
close(wr)
s = read(rd, String)             # (readall(rd) in pre-1.0 Julia)
close(rd)
s
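On current Julia (1.x) the same hack can be packaged into a reusable helper; a minimal sketch, where capture_stdout is an illustrative name rather than anything exported by a package:
function capture_stdout(f::Function)
    old = stdout
    rd, wr = redirect_stdout()   # stdout now feeds into a pipe we control
    try
        f()                      # everything f prints goes into the pipe
        Libc.flush_cstdio()      # flush any output buffered at the C level
    finally
        redirect_stdout(old)     # always restore the real stdout
        close(wr)                # signal EOF so the read below terminates
    end
    s = read(rd, String)
    close(rd)
    return s
end
# e.g.: s = capture_stdout(() -> f())
# Caveat: very large outputs can fill the pipe buffer and block f();
# in that case, drain rd asynchronously (e.g. with @async) while f runs.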

Related

Call perl function from another perl script with different Active perl versions

We have two versions of ActivePerl, 5.6 and 5.24. We have web services which have to be executed with ActivePerl 5.24 (to support TLS 1.2), and these need to be invoked from ActivePerl 5.6. We are using the Windows operating system.
Steps followed:
Caller code, which is executed under 5.6, invokes the 5.24 version using the system or require command.
Problem:
How do I call the 5.24 Perl function (example: webservicecall(arg1) { return "xyz" }) from the 5.6 Perl script, through the system command, require, etc.?
Also, how do I get the return value of the 5.24 Perl function?
Note:
It's a temporary workaround to have two Perl versions; we plan to upgrade to a newer version.
Here Perl 5.6 is installed in "C:\Perl\bin\perl\" and Perl 5.24 is installed in "D:\Perl\bin\perl\".
"**p5_6.pl**"
print "Hello Perl5_6\n";
system('D:\Perl\bin\perl D:\sample_program\p5.24.pl');
print $OUTFILE;
$retval = Mul(25, 10);
print ("Return value is $retval\n" );
"**p5_24.pl**"
print "Hello Perl5_24\n";
our $OUTFILE = "Hello test";
sub Mul($$)
{
my($a, $b) = @_;
my $c = $a * $b;
return($c);
}
I have written this sample program to illustrate calling Perl 5.24 from a Perl 5.6 script. During execution I didn't get the expected output. How do I get the "return $c" value and the "our $OUTFILE" value of p5_24.pl in the p5_6.pl script?
Note: The above is a sample program; based on this I will modify the actual program to use serialized data.
Place the code for the function that needs v5.24 in a wrapper script, written just so that it runs that function (and prints its result). Actually, I'd recommend writing a module with that function and then loading that module in the wrapper script.
Then run that script under the wanted (5.24) interpreter, by invoking it via its full path. (You may need to be careful to make sure that all libraries and environment are right.)   Do this in a way that allows you to pick up its output. That can be anything from backticks (qx) to pipe-open or, better, to good modules. There is a range of modules for this, like IPC::System::Simple, Capture::Tiny, IPC::Run3, or IPC::Run. Which to use would depend on how much you need out of that call.
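For instance, with Capture::Tiny (one of the modules just mentioned) the run-and-capture step might look like this; the interpreter path and script name are taken from the question:
use strict;
use warnings;
use Capture::Tiny qw(capture);
# Run the v5.24 script under its own interpreter and capture its output
my ($stdout, $stderr, $exit) = capture {
    system('D:\Perl\bin\perl', 'D:\sample_program\p5.24.pl');
};
die "v5.24 program failed: $stderr" if $exit != 0;
print "v5.24 said: $stdout\n";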
You can't call a function that lives in another program; you can only have that function run, somehow, under the other interpreter.
Also, variables (like $OUTFILE) defined in one program cannot be seen in another one. You can print them from the v5.24 program, along with that function result, and then parse that whole output in the v5.6 program. Then the two programs would need a little "protocol" -- to either obey an order in which things are printed, or to have prints labeled in some way.
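Such a protocol can be as simple as one LABEL=value pair per line; a sketch, with labels invented for illustration:
# In the v5.24 program: print labeled values, one per line
print "OUTFILE=$OUTFILE\n";
print "RESULT=", Mul(25, 10), "\n";

# In the v5.6 program: parse the labels back out of the captured
# output ($output here holds the qx() capture of the v5.24 program)
my %got = map { /^(\w+)=(.*)/ ? ($1 => $2) : () } split /\n/, $output;
print "outfile: $got{OUTFILE}, result: $got{RESULT}\n";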
Much better, write a module with functions and variables that need be shared. Then the v5.24 program can load the module and import the function it needs and run it, while the v5.6 program can load the same module but only to pick up that variable (and also run the v5.24 program).
Here is a sketch of all this. The package file SharedBetweenPerls.pm
package SharedBetweenPerls;
use warnings;
use strict;
use Exporter qw(import);
our @EXPORT_OK = qw(Mul export_vars);
my $OUTFILE = 'test_filename';
sub Mul { return $_[0] * $_[1] }
sub export_vars { return $OUTFILE }
1;
and then the v5.24 program (used below as program_for_5.24.pl) can do
use warnings;
use strict;
# Require this to be run by at least v5.24.0
use v5.24;
# Add path to where the module is, relative to where this script is
# In our demo it's the script's directory ($RealBin)
use FindBin qw($RealBin);
use lib $RealBin;
use SharedBetweenPerls qw(Mul);
my ($v1, $v2) = @ARGV;
print Mul($v1, $v2);
while the v5.6 program can do
use warnings;
use strict;
# (no 'say' here: feature.pm requires Perl 5.10+, and this must run under 5.6)
use FindBin qw($RealBin);
use lib $RealBin;
use SharedBetweenPerls qw(export_vars);
my $outfile = export_vars(); #--> 'test_filename'
# Replace "path-to-perl..." with an actual path to a perl
my $from_5_24 = qx(path-to-perl-5.24 program_for_5.24.pl 25 10); #--> 250
print "Got variable: $outfile, and return from function: $from_5_24\n";
where $outfile has the string test_filename while $from_5_24 is 250.†
This is tested to work as it stands if both programs, and the module, are in the same directory, with names as in this example. (And with path-to-perl-5.24 replaced with the actual path to your v5.24 executable.) If they are at different places you need to adjust paths, probably the package name and the use lib line. See lib pragma.
Please note that there are better ways to run an external program --- see the recommended modules above. All this is a crude demo since many details depend on what exactly you do.
Finally, the programs can also connect via a socket and exchange all they need but that is a bit more complex and may not be needed.
† The question's been edited, and we now have D:\Perl\bin\perl for path-to-perl-5.24 and D:\sample_program\p5.24.pl for program_for_5.24.
Note that with the p5.24.pl program at such a location, you'd have to come up with a suitable location for the module as well; the module's name would then need to include (a part of) that path, and it would have to be loaded under that name. See for example this post.
A crude demo without a module (originally posted)
As a very crude sketch, in your program that runs under v5.6 you could do
my $from_5_24 = qx(path-to-perl-5.24 program_for_5.24.pl 25 10);
where the program_for_5.24.pl then could be something like
use warnings;
use strict;
sub Mul { return $_[0] * $_[1] }
my ($v1, $v2) = @ARGV;
print Mul($v1, $v2);
and the variable $from_5_24 ends up being 250 in my test.
You cannot directly call a Perl function running with another Perl version. You would need to create a program which explicitly invokes the function. The input and output need to be explicitly serialized in order to be transported between these two programs.
Serializing could be done with Data::Dumper, Storable, or similar. If lower performance is acceptable, you could invoke the program which provides the function with system and share the serialized data via temporary files or pipes. Or you could create a client-server architecture and share the serialized data over sockets. The latter is faster since it skips the repeated startup and teardown of the other process and instead keeps it running.
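A sketch of the temporary-file variant with Storable; the file name result.stor and the script names are illustrative, and note that the Storable binary format has to be compatible between the two Perl builds (a text format like Data::Dumper or JSON avoids that concern):
# writer.pl -- run under v5.24: serialize the results to a file
use strict;
use warnings;
use Storable qw(store);
my %result = ( product => 25 * 10, outfile => 'Hello test' );
store(\%result, 'result.stor');

# reader.pl -- run under v5.6: run the writer, then deserialize
use strict;
use warnings;
use Storable qw(retrieve);
system('D:\Perl\bin\perl', 'writer.pl') == 0 or die "writer failed: $?";
my $result = retrieve('result.stor');
print "product=$result->{product} outfile=$result->{outfile}\n";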

move argparse use into separate function

I have a question about the functionality of argparse. I use argparse for custom functions and it's great, but sometimes I'd like to move the use of argparse and supplemental code into a separate function and use it there to reduce boilerplate / visual noise.
This is a partial example of what I'd like to do:
function A
set --local options ... # some definition.
argparse_wrapper --name A $options -- $argv; or return 1
end
instead of
function A
set --local options ... # some definition.
argparse --name A $options -- $argv; or return 1
# Code validating flags set by argparse in some way that argparse is unable to do,
# i.e. validation that requires values from two flags (so f/flag!script would not
# work).
#
# Or, changing flag names to names more appropriate inside the function.
#
# Other boilerplate related to options, but
# unrelated to the purpose of the function.
#
end
But, I'm unable to set values inside of a function and transfer those values seamlessly to the caller. As in, argparse sets values in the outer scope (the function calling argparse), but I'm unable to do the same with a custom argparse wrapper of my own. At least, I'm unsure of how to do so if there is a clean way. In particular, argparse can set local variables in its outer scope, and I want to keep that functionality in the supposed argparse wrapper. Is that possible?
I'm the person who designed and implemented argparse. The approach I recommend is the one you'll find in the share/functions/fish_opt.fish module. Execute the argparse in the function that implements the command. Define a helper function with the --no-scope-shadowing flag to give it direct access to the vars in the parent function. Then call that function to validate the args (or do whatever is needed) after argparse returns.
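A minimal sketch of that pattern, with invented option and function names:
function A
    set --local options h/help 'm/min=' 'M/max='
    argparse --name A $options -- $argv; or return 1
    # Validation that needs values from two flags, moved into a helper
    __A_check_flags; or return 1
    # ... the real work of A ...
end

# --no-scope-shadowing lets this helper read and set variables
# (including the _flag_* vars argparse just set) directly in A's scope
function __A_check_flags --no-scope-shadowing
    if set -q _flag_min; and set -q _flag_max; and test "$_flag_min" -gt "$_flag_max"
        echo "A: --min must not exceed --max" >&2
        return 1
    end
end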

Is There a tutorial how to suppress Pylint warnings for Squish?

I am trying to suppress Pylint warnings for Squish, but without having to paste the same code at the top of every script, as described here: https://kb.froglogic.com/display/KB/Example+-+Using+PyLint+with+Squish+test+scripts+that+use+source%28%29
I would like to know if there is a file that I can configure and load into Squish.
The article describes the only option: defining the Squish functions and symbols yourself.
However, it shows what to do in a single Squish test script file only for the sake of simplicity.
You should of course put those Squish function definitions in a separate, re-usable file, and use import to "load" the definitions into your test.py file:
from squish_definitions import *
def main():
    ...
in squish_definitions.py:
# Trick Pylint and Python IDEs into accepting the
# definitions in this block, whereas upon execution
# none of these definitions will take place:
if -0:
    class ApplicationContext:
        pass

    def startApplication(aut_path_or_name, optional_squishserver_host, optional_squishserver_port):
        return ApplicationContext
    # etc.
Also, you should generally switch over to using Python's import in favor of Squish's source() function.
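For example, if the shared definitions live in a directory next to your test scripts, something like this replaces a source() call (the directory name shared_scripts is illustrative):
import os
import sys

# Make the directory containing squish_definitions.py importable,
# then use a regular Python import instead of Squish's source()
sys.path.append(os.path.join(os.path.dirname(__file__), "shared_scripts"))
from squish_definitions import *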

Find dead code in Golang monorepo

My team has all our Golang code in a monorepo.
Various package subdirectories with library code.
Binaries/services/tools under cmd
We've had it for a while and are doing some cleanup. Are there any tools or techniques that can find functions not used by the binaries under cmd?
I know go vet can find private functions that are unused in a package. However I suspect we also have exported library functions that aren't used either.
UPD 2020: The unused tool has been incorporated into staticcheck.
Unfortunately, v0.0.1-2020.1.4 will probably be the last version to support this feature. Dominik explains that this is because the check consumes a lot of resources and is hard to get right.
To get that version:
env GO111MODULE=on go get honnef.co/go/tools/cmd/staticcheck@v0.0.1-2020.1.4
To use it:
$ staticcheck --unused.whole-program=true -- ./...
./internal/pkg/a.go:5:6: type A is unused (U1001)
Original answer below.
Dominik Honnef's unused tool might be what you're looking for:
Optionally via the -exported flag, unused can analyse all arguments as a single program and report unused exported identifiers. This can be useful for checking "internal" packages, or large software projects that do not export an API to the public, but use exported methods between components.
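Judging by that description, the standalone tool's whole-program mode was invoked along these lines (a hedged reconstruction; check the tool's -help for the exact flags):
$ unused -exported ./...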
Try running go build -gcflags -live. This passes the -live flag to the compiler (go tool compile), instructing it to output debugging messages about liveness analysis. Unfortunately, it only prints when it's found live code, not dead code, but you could in theory look to see what doesn't show up in the output.
Here's an example from compiling the following program stored in dead.go:
package main
import "fmt"
func main() {
if true {
fmt.Println(true)
} else {
fmt.Println(false)
}
}
Output of go build -gcflags -live:
# _/tmp/dead
./dead.go:7: live at call to convT2E: autotmp_5
./dead.go:7: live at call to Println: autotmp_5
If I'm reading this correctly, the second line states that the implicit call to convT2E (which converts non-interface types to interface types, since fmt.Println takes arguments of type interface{}) is live, and the third line states that the call to fmt.Println is live. Note that it doesn't say that the fmt.Println(false) call is live, so we can deduce that it must be dead.
I know that's not a perfect answer, but I hope it helps.
It is a bit dirty, but it works for me.
I had a lot of structs which I did not want to test manually, so I wrote a script that renames the struct then runs all the tests (ci/test.sh) and renames it back if any test failed:
#!/bin/sh
set -e
# For every "type Foo struct {" definition, temporarily rename the
# struct, run the test suite, and restore the name only if tests fail.
git grep 'struct {' | grep type | while read line; do
    file=$(echo $line | awk -F ':' '{print $1}')
    struct=$(echo $line | awk '{print $2}')
    sed "s/$struct struct/_$struct struct/g" -i $file
    echo "testing for struct $struct changed in file $file"
    if ! ./ci/test.sh; then
        # the tests broke, so the struct is used somewhere: rename it back
        sed "s/_$struct struct/$struct struct/g" -i $file
    fi
done
It's not an open source solution, but it works.
If you are using GoLand, consider its code-inspections feature, which includes useful checks:
Reports unused constants
Reports unused exported functions
Reports unused exported types in the main package and in tests
Reports unused unexported functions
Reports global variables that are defined but never used in code
Reports unused function parameters
Reports unused types
(The implementation of this feature appears to be a black box; JetBrains has not open-sourced it.)
The Go-ecosystem detection tools seem to place more emphasis on accuracy and try hard to minimize false reports, whereas GoLand's code-inspections feature may require more judgment on your part. :)
Disclosure: I'm a paying user and don't work for JetBrains; I simply think this feature works well.
A reliable but inelegant method I've used is to rename or comment out functions you suspect might not be used and then recompile everything -- no errors means you didn't need them.
If they are needed, it shows you where these functions are called so it's good for getting familiar with a code base and seeing how things connect.

Python: Call a shell script which calls a bin. With arguments

The context: There is a directory somewhere on the system with bin files which I'd like to call. They are not callable directly, though, but through shell scripts which do all kinds of magic and then call the corresponding bin with: "$ENV_VAR/path/to/the/bin" "$@" (the software is non-free; that's probably why this construction is used).
The problem: Calling this from within Python. I tried to use:
from subprocess import call
call(["nameOfBin", "-input somefile"])
But this gave the error ERROR: nameOfBin - Illegal option: input somefile. This means the '-' sign in front of 'input' has disappeared along the way (putting more '-' signs in front doesn't help).
Possible solutions:
1: In some way preserving the '-' sign so the bin at the end actually takes '-input' as an option instead of 'input'.
2: Fix the magic in a dirty way (I will probably manage), and have a way to call a bin at a location defined by a $ENV_VAR (environment variable).
I searched for both methods, but apparently nobody before me has had this problem (or I missed it: sorry if that's the case).
Each item in the list should be a single argument. Replace "-input somefile" with "-input", "somefile":
from subprocess import call
rc = call(["nameOfBin", "-input", "somefile"])
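For the second approach (locating the script through an environment variable), expanding the variable in Python before the call keeps things explicit; ENV_VAR here stands in for whatever variable the wrapper scripts actually use:
import os
from subprocess import call

# Resolve the wrapper script's location from the environment variable,
# then pass each command-line argument as its own list item
script = os.path.join(os.environ["ENV_VAR"], "path", "to", "the", "script")
rc = call([script, "-input", "somefile"])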
