MAIN not executed by Factor on command line - factor-lang

I'm not seeing any output from my Hello World program.
$ cat hello.factor
USE: io
IN: hello
: hello ( -- ) "Hello World!" print ;
MAIN: hello
$ factor hello.factor
$
(No output)
$ factor -run=hello
Vocabulary does not exist
name "hello"
$ factor -run=hello hello.factor
$
(No output)

MAIN: defines an entry point for a vocabulary when the vocabulary is passed to run, not necessarily when it is "loaded" from the command line, as you're doing above. The easiest way to make this work is to simply issue "hello" run from the UI listener.
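A minimal listener sketch (my own, assuming the hello vocabulary defined above has already been loaded into the image):
! in the UI listener
"hello" run
Hello World!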
To actually call the hello word as a script, simply place a call in the top level, like so:
USE: io
IN: hello
: hello ( -- ) "Hello World!" print ;
! This is the important part
hello
Alternatively, you can load and run the vocabulary from the command line with the -run=vocab command line argument, for instance factor -run=hello. Note that this only works if the hello vocabulary can be found in one of Factor's vocabulary roots, which is why the bare -run=hello invocation above fails with "Vocabulary does not exist".
There is some more information on this in the docs. Try running "command-line" about in the listener.

Factor now executes a MAIN function for command line scripts that specify one. (See GitHub)


Compatible comments for both bash and batch

We write two similar scripts: one for bash (linux) and one for batch (dos/windows).
Even though the specific code is different, we would like to visually compare (diff) both scripts and have the similar parts of code aligned side by side.
We use explicit comments with the same text to achieve this. But the comment markers differ between the two scripting languages (REM or :: on Windows, # on Linux).
Therefore the alignment comes out wrong:
linux            windows
# first step     REM first step
foo.sh           foo.bat
# second step    REM second step
bar.sh           bar.bat
Is there a way to use a common character or sequence of characters to make the comments equal?
Is the use of : #; safe for both systems/scripts?
linux              windows
: #; first step    : #; first step
foo.sh             foo.bat
: #; second step   : #; second step
bar.sh             bar.bat
Are there any unwanted side effects?
: in bash is not exactly a comment. It is a null (no-op) command,
a little bit like pass in some languages.
It helps, for example, to fill a slot that must not stay empty:
if condition
then
  :
else
  doSomething
fi
So you may use it, in a way, as a sort of comment. That would work both in bash and batch (well, I know nothing of batch, but you said that :: is a comment there). But beware that it is not exactly a comment, so there are some differences.
For example
#!/bin/bash
echo one ||
:: foo
echo two
echo un ||
# bar
echo deux
Displays one, two and un but not deux.
Because echo one || prints one and then executes the following command only if it fails (which it doesn't). Here the following command is :: foo, which is not executed (you wouldn't know, since it does nothing, but it is not executed). And echo two is a brand new, unrelated command that is executed.
Whereas echo un || likewise prints un and doesn't execute the next command, since echo un did not fail. But the next command here is echo deux, because # bar doesn't count: it is a comment.
And that is only one of the many examples one could probably find to show that : is not a comment.
But, well, if you use it being aware of that, I suppose you could use it to insert void comments in your bash code that would also be void in batch.
Edit:
I won't edit for each new example that comes to mind, but this one is pretty important:
echo un # deux
echo one : two
prints
un
one : two
: is a command. So, as with other commands like ls, not every occurrence of it is treated as one (any more than echo ls lists the directory contents; there, ls is just a string).
So you can't use it as a replacement for inline comments, only for full-line comments.
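One more caveat, as a small bash sketch of my own (not from the question): because : is a real command, its arguments are still expanded and its redirections are still performed, which a true comment never would do.
#!/bin/bash
: > notes.txt                  # truncates (or creates) notes.txt; a comment would not touch it
: $(echo "side effect" >&2)    # the command substitution still runs and writes to stderr
: plain words are fine here    # plain words are just unused arguments to :
So treat such ":" lines as commands whose arguments happen to be ignored, and keep redirections, substitutions and globs out of them.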

How to see syntax errors reported with actual line numbers in the parent script when Perl is embedded within shell script?

For no justifiable reason at all, I have a pretty substantial Perl script embedded within a Bash function that is being invoked within an autoenv .env file.
It looks something like this:
perl='
$inverse = "\e[7m";
$invoff = "\e[27m";
$bold = "\e[1m";
⋮
'
perl -e "$perl" "$inputfile"
I understand that standalone Perl scripts and the PATH variable are a thing, and I understand that Term::ANSIColor is a thing. This is not about that.
My question is, if there's a syntax error in the embedded Perl code, how can I get Perl to report the actual line number within the parent shell script?
For example, say the perl= assignment occurs on line 120 within that file, but there's a syntax error on the 65th line of actual Perl code. I get this:
syntax error at -e line 65, near "s/(#.*)$/$comment\1$endcomment/"
Execution of -e aborted due to compilation errors.
…but I want to see this (the actual line number in the parent script) instead:
syntax error at -e line 185, near "s/(#.*)$/$comment\1$endcomment/"
Things I've tried (that didn't work):
assigning to __LINE__
don't even know why I thought that would work; it's not a variable, it's a constant, and you get an error stating the same
assigning to $. ($INPUT_LINE_NUMBER with use English)
I was pretty sure this wasn't going to work anyway, because this is like NR in Awk, and this clearly isn't what this is for
As described in perlsyn, you can use the following directive to set the line number and (optionally) the file name of the subsequent line:
#line 42 "file.pl"
This means that you could use
#!/bin/sh
perl="#line 4 \"$0\""'
warn("test");
'
perl -e "$perl"
Output:
$ ./a.sh
test at ./a.sh line 4.
There's no clean way to avoid hardcoding the line number when using sh, but it is possible.
#!/bin/sh
script_start=$( perl -ne'if (/^perl=/) { print $.+1; last }' -- "$0" )
perl="#line $script_start \"$0\""'
warn("test");
'
perl -e "$perl"
On the other hand, bash provides the current line number.
#!/bin/bash
script_start=$(( LINENO + 2 ))
perl="#line $script_start \"$0\""'
warn("test");
'
perl -e "$perl"
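A sketch applying the same idea back to the original snippet (assuming bash, and assuming the perl= assignment sits directly below the line that computes script_start; $inputfile is the variable from the question):
#!/bin/bash
script_start=$(( LINENO + 2 ))   # line number of the first embedded Perl line
perl="#line $script_start \"$0\""'
$inverse = "\e[7m";
$invoff = "\e[27m";
$bold = "\e[1m";
'
perl -e "$perl" "$inputfile"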
There is this useful tidbit in the perlrun man page, under the section for -x, which "tells Perl that the program is embedded in a larger chunk of unrelated text, such as in a mail message."
All references to line numbers by the program (warnings, errors, ...) will treat the #! line as the first line. Thus a warning on the 2nd line of the program, which is on the 100th line in the file will be reported as line 2, not as line 100. This can be overridden by using the #line directive. (See Plain Old Comments (Not!) in perlsyn)
Based on that statement about the #line directive, adding #line NNN (where NNN is the actual line number of the parent script where that directive appears) achieves the desired effect:
perl='#line 120
$inverse = "\e[7m";
$invoff = "\e[27m";
$bold = "\e[1m";
⋮
'
⋮

Prefixing console output of system calls from Ruby

I'd like to create a Ruby script that prefixes the console output. For example, I want to implement an interface like this:
puts 'MainLogger: Saying hello'
prefix_output_with('MainLogger') do
system 'echo hello'
end
So this shows up in the console:
MainLogger: Saying hello
MainLogger: hello
What would be a good approach to prefix all of the syscall output with some logger tag?
Note: I don't care if we echo what the system call is or not
The important point here is that there's no way to know if system will actually produce output. I'm assuming you don't want a blank MainLogger: whenever a system call doesn't print anything, so you'll need to do the prefixes in the shell:
def prefix_system_calls pre
  sys = Kernel.instance_method(:system)
  # redefine to add prefix
  Kernel.send(:define_method, :system) do |cmd|
    sys.bind(self)["#{cmd} | sed -e 's/^/#{pre}: /'"]
  end
  yield
  # redefine to call original method
  Kernel.send(:define_method, :system) do |*args|
    sys.bind(self)[*args]
  end
end
system "echo foo"
prefix_system_calls("prefix") do
system "echo bar"
end
system "echo baz"
# foo
# prefix: bar
# baz
This implementation is pretty fragile, though. It doesn't handle all the different ways you can call system, and prefixes containing special shell characters could cause an error.
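If that fragility matters, a less intrusive sketch (my own suggestion, not tested against the asker's setup) is to skip the monkey patching and run the command through Open3, adding the prefix in Ruby instead of in the shell:
require 'open3'

# Run a command and prefix every line of its combined stdout/stderr.
def prefixed_system(prefix, cmd)
  Open3.popen2e(cmd) do |_stdin, out_err, wait_thr|
    out_err.each_line { |line| puts "#{prefix}: #{line}" }
    wait_thr.value.success?   # mimic system()'s true/false result
  end
end

prefixed_system('MainLogger', 'echo hello')
# MainLogger: hello
Since the prefix is added line by line on the Ruby side, a command that prints nothing simply produces no prefixed lines, and shell metacharacters in the prefix are no longer a problem.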

Redirect / pipe into read command

This is a follow-up to my previous question on SO. I am still trying to call a script deepScript from within another script shallowScript and process its output before it is displayed on the terminal. Here is a code sample:
deepScript.sh
#!/bin/zsh
print "Hello - this is deepScript"
read "ans?Reading : "
print $ans
shallowScript.sh
#!/bin/zsh
function __process {
  while read input; do
    echo $input | sed "s/e/E/g"
  done
}
print "Hello - this is shallowScript"
. ./deepScript.sh |& __process
(edited : outcome of this syntax and of 2 alternatives pasted below)
[UPDATE]
I have tried alternative syntaxes for the last redirection . ./deepScript.sh |& __process, and each syntax has a different outcome, but of course none of them is the one I want. I'll just paste each syntax and the resulting output of ./shallowScript.sh (where I typed "input" when read was waiting for input), together with my findings so far.
Option 1 : . ./deepScript.sh |& __process
From this link, it seems that . ./deepScript.sh is run from a subshell, but not __process. Output:
zsh : ./shallowScript.sh
Hello - this is shallowScript
HEllo - this is dEEpScript
input
REading : input
Basically, the first two lines are printed as expected, then instead of printing the prompt REading :, the script directly waits for the stdin input, and then prints the prompt and executes print $ans.
Option 2: __process < <(. ./deepScript.sh)
Zsh's manpage indicates that (. ./deepScript.sh) will run as a subprocess. To me, that looks similar to Option 1. Output:
Hello - this is shallowScript
Reading : HEllo - this is dEEpScript
input
input
So, within . ./deepScript.sh, it prints read's prompt (script line 3) before the print (script line 2). Strange.
Option 3: __process < =(. ./deepScript.sh)
According to the same manpage, (. ./deepScript.sh) here sends its output to a temp file, which is then injected in __process (I don't know if there is a subprocess or not). Output:
Hello - this is shallowScript
Reading : input
HEllo - this is dEEpScript
input
Again, deepScript's line 3 prints to the terminal before line 2, but now it waits for the read to be complete.
Two questions:
Should this be expected?
Is there a fix or a workaround?
The observed delay stems from two factors:
deepScript.sh and __process run asynchronously
read reads a complete line before returning
deepScript.sh writes the prompt to standard error, but without a newline. It then waits for your input while __process continues to wait for a full line to be written so that its call to read can finish.
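A possible workaround, as a sketch (untested, and assuming you only need standard output transformed): use a plain | instead of |&, so the prompt, which read writes to standard error, keeps going straight to the terminal while standard output still flows through __process.
#!/bin/zsh
function __process {
  while read input; do
    echo $input | sed "s/e/E/g"
  done
}
print "Hello - this is shallowScript"
. ./deepScript.sh | __process   # stderr (and thus the Reading : prompt) stays on the terminal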

Is it possible to stringify parameters in ruby/irb?

I'm new to ruby ... wondering if the following is possible:
I currently run a test app within irb (irb -r test.rb) and manually execute
various commands implemented in test.rb. One of these functions is currently implemented as follows:
def cli(cmd)
  ret = $client.Cli(cmd)
  print ret, "\n"
end
Where $client.Cli() takes a string. I currently type the following in the IRB prompt
> cli "some command with parameters"
This is sent over a socket and the results are returned.
I would like to be able to do this WITHOUT the quotes. This would be just for this command.
Is there a way to do this generally in Ruby? If not, how would you extend irb to do this?
For those who know 'C' this would be like the following:
#define CLI(CMD) cli(#CMD)
CLI(Quadafi and Sheen walk into a bar...)
where the pre-processed output is:
cli("Quadafi and Sheen walk into a bar...")
Thanks
You could actually monkey patch the gets method of the IRB::StdioInputMethod and IRB::ReadlineInputMethod classes, and perform a rewrite if the cli method is called, by adding the following to your test.rb file:
module IRB
  def self.add_quotes(str)
    str.gsub(/^cli (..+?)(\\+)?$/, 'cli "\1\2\2"') unless str.nil?
  end

  class StdioInputMethod
    alias :old_gets :gets
    def gets
      IRB::add_quotes(old_gets)
    end
  end

  class ReadlineInputMethod
    alias :old_gets :gets
    def gets
      IRB::add_quotes(old_gets)
    end
  end
end
This way, any input line matching cli ... will be replaced with cli "..." before it's evaluated.
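To see what the rewrite does, you can also call the helper directly (a quick check, assuming the module above has been loaded):
IRB.add_quotes('cli show version')   # => "cli \"show version\""
IRB.add_quotes('puts 1 + 1')         # => "puts 1 + 1"  (non-cli lines pass through untouched)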
I don't think it's possible, because the command that you type to irb has to parse as ruby, and all those bare words will report errors like this:
NameError: undefined local variable or method `hello' for main:Object
(My first attempt, I just called it via cli hello.)
But if you didn't mind a more radical change, you could do something like this:
$ cat /tmp/test_cases
hello world
one
two
three
riding the corpse sled
$ ruby -e 'def f(arg) puts arg end' -ne 'f($_)' < /tmp/test_cases
hello world
one
two
three
riding the corpse sled
I just defined a simple function f() here to show how it works; you could replace f($_) with $client.Cli($_) and set up the $client global variable in the first -e argument. And you can leave off the < /tmp/test_cases if you want to type them in interactively:
$ ruby -e 'def f(arg) puts arg end' -ne 'f($_)'
hello
hello
world
world
Of course, if you want it to be any more advanced than this, I'd just write a script to do it all, rather than build something hideous from -pe or -ne commands.
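For completeness, here is a rough sketch of what such a standalone script could look like (the file name is hypothetical, and it assumes test.rb sets up $client exactly as in the question):
# cli_repl.rb (hypothetical) - a tiny REPL that sends every line to $client.Cli
require_relative 'test'   # assumed to define $client, as in the question

print '> '
while (line = gets)
  line = line.chomp
  break if line.empty?
  puts $client.Cli(line)   # the whole line is already a string, so no quotes are needed
  print '> '
end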
