You can save code to a file and compile it with raku -c. Is there any way to do the same programmatically, using some public API?
There is an undocumented named parameter :check to EVAL that will prevent the compiled code from actually running. It returns Nil on success. If there are any compilation errors, it throws just as it otherwise would.
$ raku -e 'q/say "foo"/.EVAL'
foo
$ raku -e 'q/zay "foo"/.EVAL'
===SORRY!=== Error while compiling ...
Undeclared routine:
zay used at line 1. Did you mean 'say'?
$ raku -e 'say q/say "foo"/.EVAL(:check)'
Nil
Perhaps that named parameter should be documented.
EDIT: Please note that this will not prevent BEGIN blocks from being run, as @ugexe pointed out.
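For example, something along these lines demonstrates it (a sketch; the exact error text varies by Rakudo version):
$ raku -e 'q/BEGIN { say "BEGIN ran" }; zay "foo"/.EVAL(:check)'
BEGIN ran
===SORRY!=== Error while compiling ...
Undeclared routine:
zay used at line 1. Did you mean 'say'?
The BEGIN block has already run by the time the compile error is reported.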
I'd like to do the following in raku on windows
raku -n -e ".say if /mydatabegin/;" *.file
Failed to open file C:\..\*.file: Invalid argument
The glob isn't interpreted as a glob. I assume that's because Windows requires programs to do the globbing themselves? So is there a pre-processing directive, function, or switch I might have missed, or a redirect or something, that allows the glob to be expanded while keeping the simplicity of the -n (or -p) and -e switches?
Obviously, I can change it to a full program by removing the -n (or -p), just using -e to specify a main, and loop on the glob results. But I really like -n.
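For the record, that full-program fallback can stay a one-liner if you lean on the core dir() routine and its :test matcher instead of a glob (a sketch; the '.file' extension is just whatever pattern you would otherwise glob for):
raku -e "for dir(test => / '.file' $ /) -> $f { .say for $f.lines }"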
PS. I'm literally just learning raku, and was surprised this didn't work out of the box. So examples of full programs with easy syntax work also. But I really like -n.
Edit: re @chenyf
raku -e ".say for $*ARGFILES" *.file
Same error. Related:
raku -e ".say for $*ARGFILES.lines" *.file
Same error.
raku -e "use IO::Glob; .say for glob('*.file')"
Worked as expected! Expanding:
raku -e "use IO::Glob; .say for glob('*.file').lines"
No such method 'lines' for invocant of type 'IO::Glob'
Getting closer - perhaps expanding on this is a good enough workaround. But returning to attempts at one-line glory:
raku -e "use IO::Glob; .say for glob($*ARGFILES)" test.file
Cannot resolve caller glob(IO::ArgFiles:D); none of these signatures match.
Ok - let's retreat back to the safety of strings:
raku -e "use IO::Glob; .say for glob($*ARGFILES.Str)" test.file
Yes! SO..:
raku -e "use IO::Glob; .say for glob($*ARGFILES.Str).lines" test.file
No such method 'lines' for invocant of type 'IO::Glob'
I clearly need to read more of the manual. But let's retreat a little and see if my use case works:
raku -e "use IO::Glob; .say for glob($*ARGFILES.Str)" *.file
Failed to open file C:\..\*.file: Invalid argument
The same error I started off with. Could this just be a raku on windows error?
Edit:
raku -MIO::Glob -e "my @files = (map { glob($_).dir }, @*ARGS).flat; for @files -> $file { say $_ for $file.lines }" *file *file2 *5
I have three sets of files. I can almost live with this solution - except for some reason the lines are being printed with surrounding " characters.
Any ideas on shortening, and getting rid of the quotes?
EDIT: Working around how $*ARGFILES is built from the raw (unglobbed) @*ARGS:
raku -MIO::Glob -n -e "BEGIN { @*ARGS = (map { glob($_) }, @*ARGS).flat }; .say" *.file *.file2
This has the advantage of still looking like the original one-liner; it uses -n! It just has to do the globbing itself, which arguably should happen when $*ARGFILES is created (that it doesn't seems like a bug).
raku -MIO::Glob -e "BEGIN { @*ARGS = (map { glob($_) }, @*ARGS).flat }; .say for $*ARGFILES.lines" *.file *.file2
Converting to $*ARGFILES.lines above shows that $*ARGFILES gets its values from @*ARGS dynamically.
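A quick way to see that (assuming one.file and two.file exist in the current directory) is to populate @*ARGS at BEGIN time without passing any arguments at all:
raku -e "BEGIN { @*ARGS = <one.file two.file> }; .say for $*ARGFILES.lines"
This prints the lines of both files even though nothing was given on the command line.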
EDIT
Lastly, it turns out the glob function doesn't work with directories, at least on Windows (the documentation has an example that simply doesn't work).
#Example from https://github.com/zostay/raku-IO-Glob
for glob("src/core/*.pm") -> $file { say ~$file }
#mine that doesn't work
raku -MIO::Glob -e "for glob('..\*.file') -> $file { say ~$file }"
#mine that does work.
raku -MIO::Glob -e "for glob('*.file').dir('..') -> $file { say ~$file }"
#And therefore the final modification of the script above:
raku -MIO::Glob -e "BEGIN { #*ARGS = (map { glob(.IO.basename).dir(.IO.dirname) }, #*ARGS).flat }; .say for $*ARGFILES.lines" ..\*.file
My rudimentary understanding of file-globbing is that the shell handles it, and since it seems that you're on Windows, all bets may be off. The only exception may be if you're using WSL (Windows Subsystem for Linux), which should give you a more Unix/Linux-like experience:
https://learn.microsoft.com/en-us/windows/wsl/about
According to the Microsoft document below, Windows has two built-in shells, CMD.exe and PowerShell (I believe the WSL shell above is optional):
"Command Shell Overview"
"Windows has two command shells: The Command shell and PowerShell. Each shell is a software program that provides direct communication between you and the operating system or application, providing an environment to automate IT operations."
https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/windows-commands
Below is what works on a fairly old bash/MacOS system.
With the -ne (non-autoprinting) command line flag:
raku -ne 'print $_ ~ "\n";'
raku -ne 'say $_ ;'
raku -ne '.say;'
With the -pe (autoprinting) command line flag:
raku -pe 'print "";'
raku -pe ''
I literally just printed the contents of five (5) *.txt files using the commands above. The -e flag always ends your command-line flag cluster, but flags can be combined before it, as in -ne or -pe (see the -M flag below for an exception).
Now because you're on Windows, you may have to swap "" double-quotes with '' single-quotes (and vice versa!). But hopefully that gets you started. [Any further problems and you may have to specify the full path to your Rakudo executable].
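For instance, the simpler one-liners above would typically be written like this under CMD.exe (the file name is just an example):
raku -ne ".say" somefile.txt
raku -pe "" somefile.txt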
If you'd like Raku to handle the directory-munging for you, there's Raku's dir() command, which can be used in conjunction with its :test adverb (parameter). Compare below when in the afore-mentioned directory with five (5) *.txt text files, (remove .elems calls to print actual text):
raku -e '.lines.elems.say for dir(test => / \.txt $ /);'
14
16
34
1
16
VERSUS:
raku -e '.lines.elems.say for $*ARGFILES;' *.txt
81
https://docs.raku.org/routine/dir
ADDENDUM: My inspection of the Raku P5-to-P6 Glossary suggests that the designers deliberately left Perl5's glob function out of Perl6/Raku, opting instead for a more-powerful built-in dir() routine. Links below:
https://docs.raku.org/language/5to6-perlfunc#index-entry-glob_-_perlfunc
https://docs.raku.org/routine/dir
Raku module IO::Glob is non-core (and I've never tried it). But if you're working at the command line and want to load a module, you use (for example) -MIO::Glob followed by -e, -ne, or -pe, etc. (Also, there's no need to incorporate a use IO::Glob; line within your Raku one-liner; it's loaded via -M already.)
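In other words, these two forms should be equivalent (the pattern is just an example):
raku -MIO::Glob -e "for glob('*.txt') -> $file { say ~$file }"
raku -e "use IO::Glob; for glob('*.txt') -> $file { say ~$file }"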
See Raku one-liner examples with the Text::CSV module and/or XML module at the links below:
https://unix.stackexchange.com/search?q=%5Bcsv%5D+Raku
https://unix.stackexchange.com/search?q=%5Bxml%5D+Raku
Follow additional Windows CMD.exe discussion/resolution below:
https://github.com/rakudo/rakudo/issues/4550
The general answer: it's not a bug. Windows programs have to deal with their own globbing if they want it. Making it work in the raku executable makes sense to me; it removes platform-specific surprises and makes one-liners easier.
But others didn't see it that way, and there is an easy enough solution - create your own module so that the code can remain consistent and be called relatively simply.
Here's a module for starters. There is room to add things like
a switch for making a successful match mandatory
a switch to indicate that a failed glob should stay in the @*ARGS variable
a switch to only glob after it.
a routine to apply the globbing instead of automatically doing it. This would allow you to remove the non-file switches
gather each glob into its own list (perhaps with a switch).
The module:
unit module CLGlob:ver<1.0>:auth<pureabsolute>;
use IO::Glob;
@*ARGS = map { .Str }, (map { glob(.IO.basename).dir(.IO.dirname) }, @*ARGS).flat;
Note: the expression in the second map can be simplified when the glob function works for windows with directories.
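Presumably the simplified form would then be just this (untested, since it depends on that fix):
@*ARGS = map { .Str }, (map { glob($_) }, @*ARGS).flat;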
Note: I convert each element to .Str to make @*ARGS consistent. The $*ARGFILES worked without doing that, so some processing can be saved if you'll never look at @*ARGS again.
Finally, the end result:
raku -MCLGlob -ne ".say" ..\*.file ..\*.file2
Yay.
Is it possible to have (Rakudo) Perl6 execute some code before dropping you into the REPL? Like python does with "python -i ".
For instance, I want to load up some modules and maybe read a side file and build some data structures from that side file before dropping into the REPL and letting the user do the things they need to do on the data structure, using the REPL as a user interface.
This is similar to, but different from, "Start REPL with definitions loaded from file", though answers to this question might satisfy that one. The basic case is that, at the end of execution of any program, instead of exiting, the interpreter leaves the user at the REPL. Aside from providing a nifty, built-in, Perl6-based user interface for interactive programs, it also provides a good tool from which to debug code that otherwise exits with an error.
edit:
Selecting Zoffix's solution as the correct (so far) one as it is the only one that satisfies all requirements as stated. Here's hoping this capability gets added to the compiler or language spec.
You can load modules with the -M switch.
$ perl6 -MJSON::Tiny
To exit type 'exit' or '^D'
> to-json Array.new: 1,2,3.Str
[ 1, 2, "3" ]
>
If you want to run other code, currently you have to put it into a module first.
$ mkdir lib
$ echo 'our $bar = 42' > lib/foo.pm6
$ perl6 -Ilib -Mfoo
To exit type 'exit' or '^D'
> $bar
42
>
I'd like to provide an answer that Zoffix gave on IRC. It satisfies the basic requirement, but it is far from pretty, and it uses NQP, for which there is no user support; nor is the NQP API ("nqp::*" calls) guaranteed for the future, and it can change without warning.
replify 「
say 'Hello to your custom REPL! Type `say $a` to print the secret variable';
my $a = "The value is {rand}";
」;
sub replify (Str:D \pre-code = '') {
use nqp;
my %adverbs; # command line args like --MFoo
my \r := REPL.new: nqp::getcomp('perl6'), %adverbs;
my \enc := %adverbs<encoding>:v.Str;
enc && enc ne 'fixed_8' && $*IN.set-encoding: enc;
my $*CTXSAVE := r;
my $*MAIN_CTX;
pre-code and r.repl-eval: pre-code, $, :outer_ctx(nqp::getattr(r, REPL, '$!save_ctx')),
|%adverbs;
$*MAIN_CTX and nqp::bindattr(r, REPL, '$!save_ctx', $*MAIN_CTX);
r.repl-loop: :interactive, |%adverbs;
}
I'm now reading Programming Ruby 1.9&2.0 by Dave Thomas, in which there is the following command line script:
$ ruby -pi.bak -e "gsub(/Perl/, 'Ruby')" *.txt
I know from this text that the -p option places program code within the loop while gets; ...; print; end, and regular expressions match against $_ within -e scripts. And I found that gsub is identical to $_.gsub within the -e script. But how does gsub get its receiver object? Is there any explicit rule describing it?
There is no explicit rule describing it because it works like everywhere else in Ruby and has nothing to do with the -p flag.
gsub gets sent to the main object, because it is the default receiver here, and, like you noted, there is no explicit receiver defined.
Ruby has two different gsub methods: the one in String that you were probably thinking of, and the one in Kernel that is the answer to your question. Kernel gets included by Object, and main is an instance of Object.
From the Kernel#gsub documentation:
Equivalent to $_.gsub..., except that $_ will be updated if substitution occurs. Available only when -p/-n command line option specified.
$_ is "the last string read by gets or readline in the current scope."
I came across set -e some time ago and I admit I love it.
Now, after some time I'm back to write some bash scripting.
My question is whether there are some best practices for when to use set -e and when not to use it (e.g. in small/big scripts, etc.), or whether I should rather use a pattern like cmd || exit 1 to track errors.
Yes, you should always use it. People make fun of Visual Basic all the time, saying it's not a real programming language, partly because of its “On Error Resume Next” statement. Yet that is the default in shell! set -e should have been the default. The potential for disaster is just too high.
In places where it's ok for a command to fail, you can use || true or its shortened form ||:, e.g.
grep Warning build.log ||:
In fact you should go a step further, and have
set -eu
set -o pipefail
at the top of every bash script.
-u makes it an error to reference a non-existent variable such as a misspelled ${HSOTNAME}, at the cost of requiring some gymnastics: checking ${#} before you reference ${1}, ${2}, and so on.
pipefail makes pipelines like misspeled-command | sed -e 's/^WARNING: //' raise errors, instead of reporting only the last command's exit status.
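Putting those pieces together, the top of a script might look like this minimal sketch (the log-file argument and the trailing ||: are only illustrative):
#!/usr/bin/env bash
set -eu
set -o pipefail

# -u: referencing $1 with no argument given is fatal, so check $#
# (or supply a default) before touching positional parameters.
if [ "$#" -lt 1 ]; then
    echo "usage: $0 <build-log>" >&2
    exit 2
fi

# pipefail: a failure anywhere in the pipeline (e.g. a misspelled command
# on the left) now fails the whole pipeline. The ||: is there because
# grep exits non-zero when it finds no matches, which is fine here.
grep Warning "$1" | sed -e 's/^.*Warning: //' ||: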
If your script code checks for errors carefully and properly where necessary, and handles them in an appropriate manner, then you probably don't ever need or want to use set -e.
On the other hand, if your script is a simple sequential list of commands to be run one after another, and you want the script to terminate if any one of them fails, then sticking set -e at the top would be exactly what you want to keep your script simple and uncluttered. A perfect example of this would be a script to compile a set of sources where you want the compilation to stop after the first file with errors is encountered.
More complex scripts can combine these methods, since you can use set +e to turn its effect off again and go back to explicit error checking.
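For example (the commands themselves are only placeholders):
set -e
tar -xzf sources.tar.gz        # must succeed, or the script stops here

set +e                         # back to explicit error checking
make -j4
if [ "$?" -ne 0 ]; then
    echo "build failed, cleaning up" >&2
    rm -rf build/
    exit 1
fi
set -e                         # strict mode back on for the rest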
Note that although set -e is supposed to cause the shell to exit if (and only if) any untested command fails, it is wise to turn it off again when your code is doing its own error handling. There can easily be odd cases where a command returns a non-zero exit status you weren't expecting, possibly ones you won't catch in testing, and a sudden fatal termination of your script could leave something in a bad state. So don't use set -e, or don't leave it turned on after using it briefly, unless you really know that you want it.
Note also that you can still define an error handler with trap ERR to do something on an error condition when set -e is in effect, as that will still be run before the shell exits.
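A minimal illustration of that:
#!/bin/bash
set -e
trap 'echo "error near line $LINENO, exiting" >&2' ERR

false            # the ERR trap fires here, then set -e exits the shell
echo "never reached"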
You love it!?
For myself, I generally prefer having a line like this in my .bashrc:
trap '/usr/games/fortune /usr/share/games/fortunes/bofh-excuses' ERR
(on Debian: apt-get install fortunes-bofh-excuses :-)
But it's only my preference ;-)
More seriously
lastErr() {
local RC=$?
history 1 |
sed '
s/^ *[0-9]\+ *\(\(["'\'']\)\([^\2]*\)\2\|\([^"'\'' ]*\)\) */cmd: \"\3\4\", args: \"/;
s/$/", rc: '"$RC/"
}
trap "lastErr" ERR
Gna
bash: Gna : command not found
cmd: "Gna", args: "", rc: 127
Gna gna
cmd: "Gna", args: "gna", rc: 127
"Gna gna" foo
cmd: "Gna gna", args: "foo", rc: 127
Well, from there, you could:
trap "lastErr >>/tmp/myerrors" ERR
"Gna gna" foo
cat /tmp/myerrors
cmd: "Gna gna", args: "foo", rc: 1
Or better:
lastErr() {
local RC=$?
history 1 |
sed '
s/^ *[0-9]\+ *\(\(["'\'']\)\([^\2]*\)\2\|\([^"'\'' ]*\)\) */cmd: \"\3\4\", args: \"/;
s/$/", rc: '"$RC/
s/^/$(date +"%a %d %b %T ")/"
}
"Gna gna" foo
cat /tmp/myerrors
cmd: "Gna gna", args: "foo", rc: 1
Tue 20 Nov 18:29:18 cmd: "Gna gna", args: "foo", rc: 127
... You could even add other information like $$, $PPID, $PWD or maybe your..
When this option is on, if a simple command fails for any of the reasons listed in Consequences of Shell Errors or returns an exit status value >0, and is not part of the compound list following a while, until, or if keyword, and is not a part of an AND or OR list, and is not a pipeline preceded by the ! reserved word, then the shell shall immediately exit.
Using bash 4.1.5:
#!/bin/bash
set -e
foo()
{
false
echo "argh, I don't want to get there! What about set -e?!"
}
foo && echo "ok"
This yields the following output:
argh, I don't want to get there! What about set -e?!
ok
This issue occurs whenever foo is called as a condition (i.e. inside if, while, &&, ||, etc.). foo behaves correctly if called as a simple command.
I find this behavior surprising and quite frankly dangerous, because this means that the behavior of a bash function changes depending on how it is called. For example, even something as simple as foo and foo && true will not yield the same results. This is very troubling! One can only imagine how much chaos this could cause if foo is doing sensitive operations...
Is there any workaround I could use to avoid this kind of situation?
Why don't you make foo() return a non-zero exit code if it fails?
foo(){
return 1
echo "argh, I don't want to get here! What about set -e?!"
}
The behavior you describe is expected, and quite necessary. Consider a function like:
# Succeeds (exit status 0) only if word $1 occurs in file $2.
word_is_in_file() {
    grep "$1" "$2" > /dev/null
}
Now, consider a script that uses this function (sorry, this example is a bit contrived
since a real script would probably just invoke grep directly) to make a decision:
if word_is_in_file $word $file; then
do_something
else
do_something_else
fi
The definition of the function may be buried in a library of shell
functions that the author never sees. The author does not consider
the grep failure to be a failure, and would be very baffled if
the script terminated because of it.
A way to get the semantics you desire is to do something like:
foo() {
# This function will abort if errors are encountered, but
# the script will continue
sh -e -c '
false
echo not reached'
}
foo && echo not reached
echo reached
foo
echo not reached
The semantics of set -e are also defined not to abort the script in the "foo && ..." case, for the same reason: it allows branching.