I have an OTP based Erlang application that seems to behave weird.
I want to connect to the erlang shell and trace exactly what is happening.
I can do all my calls to dbg:tracer(), dbg:tp() etc. just fine, however no output is sent to my shell.
I think this might be, because I am connecting via a remote shell.
However, when I call dbg:n(wiwob@vlxd38-wob), I get an error:
** exception error: bad argument in an arithmetic expression
in operator -/2
called as wiwob@vlxd38 - wob
How can I find out which shell the output is sent to and pipe it to my shell?
The argument to dbg:n/1 must be an atom, and wiwob@vlxd38-wob is not an atom; it needs to be quoted, like 'wiwob@vlxd38-wob'. For the syntax of an atom, and other data types, see Atoms.
I cannot help you with the dbg problem; you do not give enough information about how you connect the debugger to a process, module, ...
For the second point the error is self-explanatory: when parsing the expression wiwob@vlxd38-wob, the shell tries to evaluate
wiwob@vlxd38 minus wob, which is impossible with two atoms.
The function dbg:n/1 has the following spec:
n(Nodename) -> {ok, Nodename} | {error, Reason}
Nodename = atom()
Reason = term()
So you must write your node name as 'wiwob@vlxd38-wob' in order to force the whole expression to be a single atom.
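For example, in the remote shell (assuming the node is reachable; the return value then follows the spec above):
1> dbg:n('wiwob@vlxd38-wob').
{ok,'wiwob@vlxd38-wob'}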
Test-NetConnection returns TRUE when run manually, but when run in a looping script, only some of the ports return TRUE.
I wrote a PowerShell script that loops through port numbers to do a Test-NetConnection:
$machine = '[targetmachinename]'
$this_machine = $env:COMPUTERNAME
$port_arr = @(8331, 8332, 8333, 8334, 8335, 8310, 8311)
foreach ($port in $port_arr) {
Test-NetConnection $machine.domain.name.com -port $port -InformationLevel Quiet
}
When I run the script, it always returns TRUE on the same two port numbers and returns FALSE on the other ports.
When I manually run the code for each port, they each come back as TRUE for all ports.
I have tried messing around with the port numbers by removing, adding, and moving them around but it always gives the same results with only the same two port numbers returning TRUE.
I suspected maybe the variable, array, foreach loop or something might be bad, but if that was the case, why would it work for the same two ports and not for the others even when I change up the array?
I was thinking about putting a delay or wait in between loops but have not tested it yet.
This script works fine when run locally from the target machine. Having this issue when running the script from another machine.
UPDATE:
Looking at the PowerShell log:
Command start time: 20191111121539
**********************
PS>TerminatingError(New-Object): "Exception calling ".ctor" with "2" argument(s): "No connection could be made because the target machine actively refused it [IPADDRESS]:[PORT]""
I noticed that the IPADDRESS does not match up with the target machine name, but instead matches up with the source machine.
I replaced the $machine.domain.name.com to the actual ip address of the machine and that got the script working as expected.
Why does $machine.domain.name.com resolve to the source machine? Even if I concatenate that incorrectly, wouldn't that normally become an unresolved address and error? Shouldn't all port checks have failed at that point?
tl;dr
Replace argument
$machine.domain.name.com
with
"$machine.domain.name.com"
While unquoted command arguments in PowerShell are typically treated as expandable strings - i.e., as if they were implicitly enclosed in "..." - this is not the case if your argument starts with a variable reference such as $machine.
In that case, PowerShell tries to evaluate the argument as an expression, and since [string] variable $machine has no .domain property (and subsequent nested properties), the entire argument effectively evaluates to $null[1] - resulting in inadvertent targeting of the local machine by Test-NetConnection.
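To see what each form of the argument actually evaluates to, you can run the two expressions on their own (using a placeholder value for $machine):
$machine = 'targetmachine'     # placeholder value
# Unquoted: parsed as an expression. A [string] has no .domain property,
# so (with strict mode off or at -Version 1) the whole chain evaluates to $null.
$machine.domain.name.com
# Quoted: an expandable string. $machine is interpolated, the rest stays literal,
# yielding targetmachine.domain.name.com
"$machine.domain.name.com"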
The subtleties around how PowerShell parses unquoted command arguments are explored in this answer; what the design rationale behind these subtleties may be is the subject of this GitHub issue.
Conversely, to learn about how expandable strings (string interpolation) - variable references and expressions embedded in "..." - work in PowerShell, see this answer.
Additionally, BACON observes the following regarding the use of -InformationLevel Quiet with Test-NetConnection:
I think passing -InformationLevel Quiet was actively impairing debugging in this case. Given $machine = 'foo', compare the output (particularly the ComputerName property) of:
Test-NetConnection $machine.domain.name.com -InformationLevel Quiet
vs.
Test-NetConnection $machine.domain.name.com
vs.
Test-NetConnection "$machine.domain.name.com".
In other words, [it's best to] ensure that the cmdlet (and its parameters) is behaving as expected before passing the parameter that says "I don't care about all that information. Just tell me if it passed or failed."
[1] $null is the effective result by default or if Set-StrictMode -Version 1 is in effect; with Set-StrictMode -Version 2 or higher, you would actually get an error.
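A quick way to see the difference described in this footnote (again with a placeholder value):
$machine = 'foo'
Set-StrictMode -Version 1
$machine.domain    # nonexistent property tolerated; result is $null
Set-StrictMode -Version 2
$machine.domain    # error: the property 'domain' cannot be found on this object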
A common mistake I've seen people make (myself included) is in variable naming and usage in PowerShell; for example, I forget the $ all the time. The following just loops through my own machine as an example, but it tests all these ports correctly.
$port_arr = @(139,3389,5040)
$mac = @("myComputer")
foreach ($mc in $mac){
foreach ($i in $port_arr) {
Test-NetConnection $mc -port $i
}
}
Do you have an example of your PowerShell code? Also, have you stepped through it to determine that it's working as expected?
I came across a mind-blowingly weird script that crashes the console:
set "h=/?" & call [if | for | rem] %%h%%
IF, FOR and REM aren't normal internal commands.
They each use their own special parser, which possibly causes some interception errors, so it crashes.
@jeb pointed out that CALL doesn't execute the following special characters, but instead converts them into a "token" (version dependent):
& returns /
&& returns 1
| returns 2
|| returns 0
/? returns <
@ returns +
@() returns ;
@if a==a : returns ,
@for %a in () do : returns +
@rem : returns -
However, even though they have unique parsers, it still doesn't explain why they all crash. So I did some testing:
Remove call
C:\>set "h=/?" & for %h%
%%h%% was unexpected at this time.
Change the command to something else. (I tried all other internal commands, none works)
Separate the two commands:
C:\>set "h=/?"
C:\>call for %%h%%
--FOR help message--
Add @
C:\>set "h=/?" & call for @%%h%%
CRASH!!!
Surround the scriptblock by ()
C:\>set "h=/?" & call for (%%h%%)
CRASH!!!
Summary of question:
What role does call play?
What caused the parser to crash?
The CALL is necessary to start a second round of the parser.
But there is a small bug (or more than one): in that phase it's not possible to execute any of the special commands, nor to use &, |, &&, ||, redirection or command blocks.
The cause seems to be that the parser internally builds a token graph, replacing the special constructs with some kind of token values.
But with CALL the executor no longer knows how to handle them.
This code tries to execute a batch file named 3.bat!
(The name can differ, depending on the Windows version.)
set "cmd=(a) & (b)"
call %%cmd%%
But in your sample, the help function is triggered on a non-executable token.
That seems to be the final death trigger that drives the executor completely out of sanity.
Summary of Research:
Calling a linefeed \n, or the help function of FOR, IF & REM, crashes cmd, exiting with ERRORLEVEL -1073741819 (0xC0000005), which indicates an access violation error.
First, the cmd parser tries to start werfault to terminate the process.
If you prematurely terminate werfault, an error message will appear!
Access violation error:
The instruction at 0x00007FF7F18E937B referenced memory at 0x0000000000000070. The memory could not be read.
It is conjectured that if, for and rem use special parsers, but when the help function is triggered via call, a non-command token is returned, which crashes the cmd parser.
Sources:
Why I can't CALL "IF" and "FOR" neither in batch nor in the cmd?
CALL me, or better avoid call
Limit CMD processing to internal commands, safer and faster?
I am reading the Software Foundations book and I came across a command that declares parameters as implicit:
Arguments nil {X}.
where, for example:
Inductive list (X:Type) : Type :=
| nil : list X
| cons : X -> list X -> list X.
However, whenever I try to execute such commands I get the following message:
Error: No focused proof (No proof-editing in progress).
The same message appears even if I try to compile scripts that come with the book. What could be the problem?
I am using Coq version 8.3pl4 and CoqIDE editor.
I just tried it on my (somewhat old) Coq 8.4 and I don't have any problem with the implicit declaration.
However if I write Argument instead of Arguments (notice the lack of "s"), I get
Error: Unknown command of the non proof-editing mode.
Did you spell it correctly?
EDIT: sorry, I misread your version. It seems that the Arguments command was added after 8.3 (it does not appear here but appears here). I advise you to update your Coq version if possible, or to restrict yourself to the 8.3 Implicit-related commands (wild guess: Implicit Arguments foo.)
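For the list type from the question, the 8.3-era spelling would look roughly like the following (a sketch only; depending on the exact version you may need the double-bracket form [[X]] to make the argument maximally inserted, so that a bare nil gets its implicit argument):
Implicit Arguments nil [X].
Implicit Arguments cons [X].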
I have been using sylfilter for over a year now (it is available from http://sylpheed.sraoss.jp/sylfilter/) and it works great as a filtering tool (no complaints). However, I have been trying to use sylfilter together with procmail and have been having a lot of trouble.
The web page for the filter shows:
sylfilter ~/Mail/inbox/1234
as the example to classify a message.
The return values are as follows:
0 junk (spam)
1 clean (non-spam)
2 uncertain
127 other errors
I have been trying to incorporate sylfilter with procmail, but not with much success. The big issue, as compared with some other spam tools like bogofilter, is that sylfilter does not make any changes to the e-mail message itself (unlike bogofilter, for which examples abound on the web, and which puts an X-Bogosity field in the message header). I want everything that is classified as junk to go to $HOME/Mail/Junk, and everything that is not to be further classified into folders by my other procmail rules. Perhaps the stuff that returns 2 can go to $HOME/Mail/uncertain.
Here is my latest attempt based on suggestions made in the Fedora mailing list.
:0 Wc
| /usr/bin/sylfilter /dev/stdin
:0 a
$HOME/Mail/Junk/.
However, this does not process the e-mail message using sylfilter (and
the logfile says "No input file." before going on to process the other
rules). So, I was wondering if anyone here knew of a similar case and knew the answer to this question.
I am not familiar with sylfilter, and the (somewhat vague) problem description makes me think there is something wrong with feeding it a message on standard input. But if you can make that work, the following is how you examine a program's exit code in Procmail.
:0
* ? sylfilter /dev/stdin
$HOME/Mail/Junk/.
# You should now have the exit code in $? if you want it for further processing
SYLSTATUS=$?
:0
* SYLSTATUS ?? ^^1^^
$HOME/Mail/INBOX/.
# ... etc
The condition succeeds if sylfilter returns a success (zero) exit code; if it fails, we fall through to subsequent recipes. We save $? to a named variable so that we can examine its value even if a subsequent recipe resets the system global $? by invoking some other external program.
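The "uncertain" case (exit code 2) mentioned in the question could be handled the same way; a sketch along the same lines:
:0
* SYLSTATUS ?? ^^2^^
$HOME/Mail/uncertain/.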
By the by, you should not need to hard-code the path to sylfilter. If it's in a nonstandard location, amend the PATH at the beginning of your .procmailrc rather than littering your code with explicit paths to executables. So if it's in /usr/local/really/sf/sylfilter, you'd put
PATH=/usr/local/really/sf:$PATH
If you need the message in a temporary file, try something like this;
TMP=`mktemp -t sylf.XXXXXXXX`
TRAP='rm -f $TMP'
:0c
$TMP
:0
* ? sylfilter $TMP
$HOME/Mail/Junk/.
# etc as above
The mktemp command creates a unique temporary file. The TRAP assignment sets up a command sequence to run when Procmail terminates; this takes care of cleaning out the temporary file when we are done. Because we will be the only writer to this file, we don't care about locking while writing a copy of the message to this file.
For more nitty-gritty syntax details, see also http://www.iki.fi/era/procmail/quickref.html
I have a few lines of code in Stata. I'd like the lines to be executed only if the .txt file to which the lines refer exists a priori. I am wondering whether there is a shell command that I can use for this that I can embed in an if statement.
For example might something like the following exist and be possible:
insheet using "file.txt" if ('file.txt')
My intent is to say insheet the file file.txt only if it exists. My concern is that the program would otherwise stop, fail, die, or whatever you call it due to a syntax error if I have that insheet statement but the file does not exist.
The immediate answer is no; there is nothing like that syntax, for several reasons.
The if qualifier tests whether some condition is true separately for each observation and whether a file exists is not an appropriate condition for testing observation by observation.
The quite different if command tests once and once only whether something is true and might seem more appropriate. In practice it is not used for this purpose, but to learn more, see help ifcmd.
Stata has no special syntax based on paired identical single quotes ' '.
However, Stata provides a separate construct here
confirm file file.txt
In practice that is going to stop a do-file or program whenever the file does not exist. A general scheme to catch the error is something like
capture confirm file file.txt
if _rc == 0 insheet using file.txt
else {
<code if the file does not exist>
}
capture is to be thought of as eating the return code from the confirm command. In general the return code _rc from any command is 0 if the command was valid and executed and some non-zero value otherwise. Sometimes one tests for a specific non-zero code. Experiment shows that file not found is return code 601. The main reason for looking up error codes (in [P] error) is to deliver official-looking error messages, but in practice knowing the zero/non-zero rule is the main detail under this heading.
The example here uses == to test for equality.
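A sketch of testing for the specific file-not-found code (601) mentioned above, rather than just zero versus non-zero:
capture confirm file file.txt
if _rc == 0 {
    insheet using file.txt
}
else if _rc == 601 {
    display as error "file.txt not found"
}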
Note that insheet using file.txt is not strictly a syntax error if the file does not exist. As far as Stata's language is concerned, that is legal syntax. However, that is a fine distinction: it is an error in every ordinary sense.
(LATER) It would be possible to short-circuit the entire process
capture insheet using file.txt
if _rc != 0 {
<code if the file does not exist>
}
as in this case the non-existence of the file is the presumed explanation for any failure of the insheet command. If, however, the insheet call were more complicated, with a varlist and/or options, then failure of the command could arise for other reasons. So in general separating out a check for the existence of the file seems a better strategy.
The confirm command has what you're looking for.
capture confirm file "file.txt"
if !_rc {  // if the file exists, confirm will return error code 0
insheet using "file.txt"
}
Alternatively, you could put a capture before the insheet command, which will catch the syntax error. Check the [P] manual for more on capture and confirm.