I have a question about the functionality of argparse. I use argparse for custom functions and it's great, but sometimes I'd like to move the use of argparse and supplemental code into a separate function and use it there to reduce boilerplate / visual noise.
This is a partial example of what I'd like to do:
function A
    set --local options ... # some definition.
    argparse_wrapper --name A $options -- $argv; or return 1
end
instead of
function A
    set --local options ... # some definition.
    argparse --name A $options -- $argv; or return 1
    # Code validating flags set by argparse in some way that argparse is unable to do,
    # i.e. validation that requires values from two flags (so f/flag!script would not
    # work).
    #
    # Or, changing flag names to names more appropriate inside the function.
    #
    # Other boilerplate related to options, but
    # unrelated to the purpose of the function.
    #
end
But I'm unable to set values inside a function and have them transferred seamlessly to the caller. That is, argparse sets variables in its outer scope (the function calling argparse), but I can't do the same from a custom argparse wrapper of my own; at least, I don't know of a clean way to do so. In particular, argparse can set local variables in its outer scope, and I want to keep that behaviour in the proposed argparse wrapper. Is that possible?
I'm the person who designed and implemented argparse. The approach I recommend is the one you'll find in the share/functions/fish_opt.fish module. Run argparse in the function that implements the command. Define a helper function with the --no-scope-shadowing flag to give it direct access to the variables in the parent function. Then call that helper to validate the args (or do whatever else is needed) after argparse returns.
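A minimal sketch of that pattern (the option names and the validation rule are made up for illustration):
function A
    set --local options 'h/help' 'm/min=' 'x/max='
    argparse --name A $options -- $argv; or return 1
    _A_validate_flags; or return 1
    # ... the real work of A, using $_flag_min, $_flag_max, etc. ...
end

# Because of --no-scope-shadowing, this helper runs with direct access to A's
# local variables, including the _flag_* variables argparse just set.
function _A_validate_flags --no-scope-shadowing
    # Example of validation that needs the values of two flags:
    if set --query _flag_min _flag_max; and test $_flag_min -gt $_flag_max
        echo 'A: --min must not be greater than --max' >&2
        return 1
    end
end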
Related
We have two versions of ActivePerl, 5.6 and 5.24. We have web services which have to be executed on ActivePerl 5.24 (to support TLS 1.2), and they need to be invoked from ActivePerl 5.6. We are using the Windows operating system.
Steps followed:
The caller code, which runs under the 5.6 version, invokes the 5.24 version using a system or require command.
Problem:
How can I call a Perl 5.24 function (for example: webservicecall(arg1) { return "xyz" }) from a Perl 5.6 script through a system command, require, etc.?
Also, how do I get the return value of the Perl 5.24 function?
Note:
It's a temporary workaround to have two Perl versions; we plan to upgrade to a higher version later.
Here Perl version 5.6 is installed in "C:\Perl\bin\perl\" and Perl version 5.24 is installed in "D:\Perl\bin\perl\".
"**p5_6.pl**"
print "Hello Perl5_6\n";
system('D:\Perl\bin\perl D:\sample_program\p5.24.pl');
print $OUTFILE;
$retval = Mul(25, 10);
print ("Return value is $retval\n" );
"**p5_24.pl**"
print "Hello Perl5_24\n";
our $OUTFILE = "Hello test";
sub Mul($$)
{
    my ($a, $b) = @_;
    my $c = $a * $b;
    return($c);
}
I have written this sample program to illustrate calling the Perl 5.24 version from the Perl 5.6 script. During execution I didn't get the expected output. How do I get the "return $c" value and the "our $OUTFILE" value of p5_24.pl in the p5_6.pl script?
Note: The above is a sample program; based on this I will modify the actual program to use serialized data.
Place the code for the function that needs v5.24 in a wrapper script, written just so that it runs that function (and prints its result). Actually, I'd recommend writing a module with that function and then loading that module in the wrapper script.
Then run that script under the wanted (5.24) interpreter, by invoking it via its full path. (You may need to be careful to make sure that all libraries and environment are right.) Do this in a way that allows you to pick up its output. That can be anything from backticks (qx) to pipe-open or, better, to good modules. There is a range of modules for this, like IPC::System::Simple, Capture::Tiny, IPC::Run3, or IPC::Run. Which to use would depend on how much you need out of that call.
You can't directly call a function that lives in another program; the best you can do is to have that function somehow run under the other interpreter.
Also, variables (like $OUTFILE) defined in one program cannot be seen in another one. You can print them from the v5.24 program, along with that function result, and then parse that whole output in the v5.6 program. Then the two programs would need a little "protocol" -- to either obey an order in which things are printed, or to have prints labeled in some way.
Much better, write a module with functions and variables that need be shared. Then the v5.24 program can load the module and import the function it needs and run it, while the v5.6 program can load the same module but only to pick up that variable (and also run the v5.24 program).
Here is a sketch of all this. The package file SharedBetweenPerls.pm
package SharedBetweenPerls;
use warnings;
use strict;
use Exporter qw(import);
our @EXPORT_OK = qw(Mul export_vars);
my $OUTFILE = 'test_filename';
sub Mul { return $_[0] * $_[1] }
sub export_vars { return $OUTFILE }
1;
and then the v5.24 program (used below as program_for_5.24.pl) can do
use warnings;
use strict;
# Require this to be run by at least v5.24.0
use v5.24;
# Add path to where the module is, relative to where this script is
# In our demo it's the script's directory ($RealBin)
use FindBin qw($RealBin);
use lib $RealBin;
use SharedBetweenPerls qw(Mul);
my ($v1, $v2) = @ARGV;
print Mul($v1, $v2);
while the v5.6 program can do
use warnings;
use strict;
use feature 'say';
use FindBin qw($RealBin);
use lib $RealBin;
use SharedBetweenPerls qw(export_vars);
my $outfile = export_vars(); #--> 'test_filename'
# Replace "path-to-perl..." with an actual path to a perl
my $from_5_24 = qx(path-to-perl-5.24 program_for_5.24.pl 25 10); #--> 250
say "Got variable: $outfile, and return from function: $from_5_24";
where $outfile has the string test_filename while the $from_5_24 variable is 250.†
This is tested to work as it stands if both programs, and the module, are in the same directory, with names as in this example. (And with path-to-perl-5.24 replaced with the actual path to your v5.24 executable.) If they are at different places you need to adjust paths, probably the package name and the use lib line. See lib pragma.
Please note that there are better ways to run an external program --- see the recommended modules above. All this is a crude demo since many details depend on what exactly you do.
Finally, the programs can also connect via a socket and exchange all they need but that is a bit more complex and may not be needed.
† The question's been edited, and we now have D:\Perl\bin\perl for path-to-perl-5.24 and D:\sample_program\p5.24.pl for program_for_5.24.
Note that with such a location of the p5.24.pl program you'd have to come up with a suitable location for the module and then its name would need to have (a part of) that path in it and to be loaded with such name. See for example this post.
A crude demo without a module (originally posted)
As a very crude sketch, in your program that runs under v5.6 you could do
my $from_5_24 = qx(path-to-perl-5.24 program_for_5.24.pl 25 10);
where the program_for_5.24.pl then could be something like
use warnings;
use strict;
sub Mul { return $_[0] * $_[1] }
my ($v1, $v2) = @ARGV;
print Mul($v1, $v2);
and the variable $from_5_24 ends up being 250 in my test.
You cannot directly call a Perl function running with another Perl version. You would need to create a program which explicitly invokes the function. The input and output need to be explicitly serialized in order to be transported between these two programs.
Serializing could be done with Data::Dumper, Storable, or similar. If performance is not critical, you could invoke the program which provides the function with system and share the serialized data via temporary files or pipes. Or you could create a client-server architecture and share the serialized data over sockets. The latter is faster since it skips the repeated startup and teardown of the other process and instead keeps it running.
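As a rough sketch of that idea using core Data::Dumper over a pipe (the script names, paths, and the data passed around are made up; Storable's nfreeze/thaw would work similarly), the 5.24 side might be
# worker_5_24.pl -- run under the 5.24 interpreter; prints its result serialized
use strict;
use warnings;
use Data::Dumper;

my ($x, $y) = @ARGV;
my $result = { product => $x * $y, outfile => 'Hello test' };

$Data::Dumper::Terse  = 1;   # print just the data, without a leading '$VAR1 ='
$Data::Dumper::Indent = 0;   # keep it on one line
print Dumper($result);
and the 5.6 side might read it back with
# caller_5_6.pl -- run under the 5.6 interpreter
use strict;
use warnings;

my $dumped = qx(path-to-perl-5.24 worker_5_24.pl 25 10);
die "worker failed: $?" if $?;

my $result = eval $dumped;   # rebuild the serialized data structure
die "could not parse worker output: $@" if $@;

print "product: $result->{product}, outfile: $result->{outfile}\n";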
I am trying to suppress the Pylint warnings from Squish, but without having to write the same code in front of my code as described here: https://kb.froglogic.com/display/KB/Example+-+Using+PyLint+with+Squish+test+scripts+that+use+source%28%29
I would like to know if there is a file that I can configure and load into Squish instead.
The article describes the only option, to define the Squish functions and symbols yourself.
However, it shows what to do in a single Squish test script file only for the sake of simplicity.
You should of course put those Squish function definitions in a separate, re-usable file, and use import to "load" the definitions into your test.py file:
from squish_definitions import *

def main():
    ...
in squish_definitions.py:
# Trick Pylint and Python IDEs into accepting the
# definitions in this block, whereas upon execution
# none of these definitions will take place:
if -0:
    class ApplicationContext:
        pass

    def startApplication(aut_path_or_name, optional_squishserver_host, optional_squishserver_port):
        return ApplicationContext

    # etc.
Also, you should generally switch over to using Python's import in favor of Squish's source() function.
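For example, if a script currently loads shared helpers with source() (the file and function names below are hypothetical), the import-based equivalent that Pylint and IDEs can follow would be:
# Before (hypothetical):
#   source(findFile("scripts", "common_helpers.py"))
# After -- a plain Python import:
from common_helpers import start_aut_and_login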
I have had no success in locating a good precommit hook I can use to validate that a Jinja2-formatted file is well-formed without attempting to substitute variables. The goal is something that will return a shell exit code of zero if the file is well-formed, regardless of whether the variables are available, and 1 otherwise.
You can do this within Jinja itself, you'd just need to write a script to read and parse the template.
Since you only care about well-formed templates, and not whether or not the variables are available, it should be fairly easy to do:
#!/usr/bin/env python
# filename: check_my_jinja.py
import sys
from jinja2 import Environment
env = Environment()
with open(sys.argv[1]) as template:
    env.parse(template.read())
or something that iterates over all templates
#!/usr/bin/env python
# filename: check_my_jinja_recursive.py
import sys
import os
from jinja2 import Environment, FileSystemLoader
env = Environment(loader=FileSystemLoader('./mytemplates'))
templates = [x for x in env.list_templates() if x.endswith('.jinja2')]
for template in templates:
    # get_source() returns (source, filename, uptodate); parse the raw source
    source = env.loader.get_source(env, template)[0]
    env.parse(source)
If a template has incorrect syntax, you will get a TemplateSyntaxError.
So your precommit hook might look like
python check_my_jinja.py template.jinja2
python check_my_jinja_recursive.py /dir/templates_folder
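If you want the hook itself to exit with a clean 0 or 1 status (rather than relying on an uncaught traceback), a small variation along these lines should work (the file name is just an example):
#!/usr/bin/env python
# filename: check_my_jinja_exitcode.py
import sys
from jinja2 import Environment, TemplateSyntaxError

env = Environment()
status = 0
for path in sys.argv[1:]:
    with open(path) as f:
        try:
            env.parse(f.read(), filename=path)
        except TemplateSyntaxError as err:
            # report file, line, and message, and remember the failure
            print("%s:%s: %s" % (path, err.lineno, err.message))
            status = 1
sys.exit(status)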
In the Julia NMF package a verbose option provides information on convergence using the @printf macro.
How can I access this output without rewriting the NMF package's IO?
To rephrase: given a function f() containing the macro @printf, how can I access the output outside f()?
This does seem like useful functionality to have: I would suggest that you file an issue with the package.
However, as a quick hack, something like the following should work:
oldout = STDOUT
(rd,wr) = redirect_stdout()
start_reading(rd)
# call your function here
flush_cstdio()
redirect_stdout(oldout)
close(wr)
s = readall(rd)
close(rd)
s
The context: There is a directory somewhere on the system with bin files which I'd like to call. They are not callable directly though, but through shell scripts which do all kinds of magic and then call the corresponding bin with: "$ENV_VAR/path/to/the/bin" "$@" (the software is non-free; that's probably why this construction is used).
The problem: Calling this from within Python. I tried to use:
from subprocess import call
call(["nameOfBin", "-input somefile"])
But this gave the error ERROR: nameOfBin - Illegal option: input somefile. This means the '-' sign in front of 'input' has disappeared along the way (putting more '-' signs in front doesn't help).
Possible solutions:
1: In some way preserving the '-' sign so the bin at the end actually takes '-input' as an option instead of 'input'.
2: Fix the magic in a dirty way (I will probably manage), and have a way to call a bin at a location defined by a $ENV_VAR (environment variable).
I searched for both methods, but apparently nobody before me had such a problem (or I didn't see it; sorry if that's the case).
Each item in the list should be a single argument. Replace "-input somefile" with "-input", "somefile":
from subprocess import call
rc = call(["nameOfBin", "-input", "somefile"])
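For the second part (calling a bin whose location is defined by an environment variable), you can expand the variable in Python before calling it; here ENV_VAR and the path components are placeholders for whatever the shell scripts actually use:
import os
from subprocess import call

# Build the same path the shell scripts would use, from the environment variable.
bin_path = os.path.join(os.environ["ENV_VAR"], "path", "to", "the", "bin")
rc = call([bin_path, "-input", "somefile"])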