Here's an example:
#!/usr/bin/zsh
cd DOES_NOT_EXIST
Error: ./test.sh:cd:3: no such file or directory: DOES_NOT_EXIST
See how it prints the filename and line number in the error, separated by those colons? I want to get rid of that so it looks the same as if it had been run from an interactive terminal. (The filename is replaced with the function name if the error occurs inside a function.)
What I want:
cd: no such file or directory: DOES_NOT_EXIST
Is there some way to configure these types of errors in zsh (or a clever trick)? Only zsh does this, not Bash.
I've tried using eval (basically trying to trick zsh), but that doesn't work; it just puts (eval) in the error message instead.
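There doesn't seem to be a zsh option for this, but one workaround (a sketch, not real zsh configuration) is to post-process stderr: the prefix has the fixed shape `file:builtin:line:`, so a sed filter can strip it, assuming the filename itself contains no colons:

```shell
# Sketch: strip zsh's "file:builtin:line:" prefix, keeping just "builtin:".
msg='./test.sh:cd:3: no such file or directory: DOES_NOT_EXIST'
echo "$msg" | sed -E 's/^[^:]+:([^:]+):[0-9]+:/\1:/'
# → cd: no such file or directory: DOES_NOT_EXIST
```

In practice you would apply this to the script's stderr, e.g. `./test.sh 2> >(sed -E 's/^[^:]+:([^:]+):[0-9]+:/\1:/' >&2)` (process substitution, which both zsh and bash support).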
I am a beginner with bash. I've encountered a problem like this:
$ "make -p"
when I type the above at the bash command line, nothing happens: no error, no output.
I have searched for the double-quote syntax of bash on many websites. All of them give a similar interpretation, for example:
https://www.gnu.org/software/bash/manual/html_node/Double-Quotes.html
and give examples like:
echo "argument"
I do not find anything covering a case like "echo argument", where the quotes enclose the whole command. Moreover, I've found a strange difference between the bash command line and bash scripts.
If I type a non-existing command in command line:
$ "holy shit"
$ "look that"
nothing happens. But if I put the same lines in a bash script:
#!/bin/bash
"holy shit"
"look that"
and execute the script, error messages are thrown:
$ ./myshell
./myshell: line 2: holy shit: command not found
./myshell: line 3: look that: command not found
Could someone give a detailed explanation of the effect of double quotes when they enclose a whole command?
Why is there no output at the command line?
Why is it different between the command line and scripts?
If you enter a command foo, the shell searches the directories listed in your PATH variable until it finds a command of this name. If there is none, you get the error message command not found.
If you enter a command that contains at least one slash (for example ./foo or foo/bar), the shell does not search the PATH but assumes that you have already entered the correct path to your command. If it does not exist, you get the error message No such file or directory.
In your case,
"cd home"
searches for a file with name cd home somewhere along your PATH, but there is no file of this name, and you get command not found. If you enter
"cd /home"
the shell bypasses the PATH search and assumes that there is a directory named "cd " (i.e., the three characters c, d, space) in your current directory, and below it a file named home with the x-bit set. There is no such file (and no such directory) on your system, so you get the error message No such file or directory.
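Both cases can be demonstrated directly (a small sketch using bash explicitly; the exact wording of the messages varies by shell):

```shell
# A bare name (no slash) triggers a PATH search:
bash -c 'no_such_command_xyz' 2>&1 | tail -1
# ends with: no_such_command_xyz: command not found

# A name containing a slash skips PATH and is treated as a literal path:
bash -c './no_such_file_xyz' 2>&1 | tail -1
# ends with: ./no_such_file_xyz: No such file or directory
```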
If you are in the mood of experimenting around, you could try the following:
mydir="cd "
mkdir "$mydir"
echo "echo Hello Stranger" >"$mydir/home"
chmod +x "$mydir/home"
"cd /home"
This should print Hello Stranger. Pay attention that in the assignment to mydir, there must be a single space between the cd and the closing quote.
The double quotes mean it is a string. You can do something like:
echo "Hello everybody"
either at the command line or in a script. Sometimes when people put stuff in quotes, you are supposed to replace what is in the quotes with your own value (removing the quotes), and sometimes people put quotes around a whole command to show exactly what you should type. For your example of "make -p", just type it without the quotes and it should work both at the command line and in a script.
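The effect of the quotes is easy to see with the shell's own word counting; quoting glues everything into a single word, so the shell looks for a command whose name is literally make -p (a sketch):

```shell
set -- "make -p"   # quoted: the shell sees ONE word, "make -p"
echo $#            # → 1
set -- make -p     # unquoted: TWO words, a command name and an option
echo $#            # → 2
```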
Same problem as this OP's, but it must have a separate cause.
The following script:
#!/bin/sh
arr=("cat" "dog" "bird")
works interactively (Debian) but fails when run from crontab with:
/bin/sh: 2: /path/zero_check.sh: Syntax error: "(" unexpected
I've tried with #!/bin/bash shebang, and declaring array with declare -a arr=("cat" "dog" "bird"), to no effect.
Any idea why?
The problem here is that you are using this shebang:
#!/bin/sh
Arrays are a Bash-specific feature that plain sh does not support.
So to make it work, change the shebang of your script to Bash:
#!/bin/bash
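A quick way to see the difference (a sketch; on Debian, /bin/sh is dash):

```shell
# Array syntax parses and works under bash:
bash -c 'arr=("cat" "dog" "bird"); echo "${arr[2]}"'
# → bird

# Under a plain POSIX sh such as dash, the "(" is a syntax error,
# which is exactly the message cron showed:
sh -c 'arr=("cat" "dog" "bird")'
# e.g.: sh: 1: Syntax error: "(" unexpected
```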
Specify your interpreter explicitly in the crontab entry. Use
bash /path/zero_check.sh
rather than
/path/zero_check.sh
Just for the record: I had an old script to run which had a syntax error in the shebang:
#/bin/bash
instead of
#!/bin/bash
Also check that the script is executable, of course.
I had a very similar problem with an incorrect bash function declaration. This works OK from the command line, but it causes cron to fail...
function test () { ... }
Cron should save the errors in /var/mail
I also recommend linting with "shellcheck" because it found another error I didn't notice.
I have the following Ruby code:
cmd="
source= $(mktemp)
echo source
"
system("#{cmd}")
system("source= $(mktemp)")
I wanted the code to execute the mktemp command and store the temporary file name in the variable source. However, the error message I get is:
sh: /tmp/tmp.EpXeLNkqjN: Permission denied
sh: /tmp/tmp.wVCqdqHSpp: Permission denied
------------------
(program exited with code: 0)
Press return to continue
The error was the same even when I ran the program as root.
However, when I run the mktemp command only, there is no problem. What is wrong?
You must not have a space after the = sign. Replace your code with
cmd="
source=$(mktemp)
echo $source
"
system("#{cmd}")
system("source=$(mktemp)")
Notice: no space after the = sign.
The problem with leaving a space after the = sign is that sh will try to execute the command given by the expansion of $(mktemp) (i.e., the freshly created file /tmp/tmp.EpXeLNkqjN or similar) with the variable source set to the empty string in its environment. The temp file exists but is not executable, which is exactly why you get Permission denied.
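The underlying rule is the `VAR=value command` form: an assignment placed immediately before a command name is passed only to that command's environment. With a space after the =, the assignment is empty and the next word becomes the command. A sketch:

```shell
# Assignment prefix: X exists only in the child command's environment.
X=hello sh -c 'echo "X is [$X]"'
# → X is [hello]

# With a space after "=", X is assigned the empty string and the next
# word is executed as the command; in the question that word was the
# expansion of $(mktemp), so the shell tried to RUN the temp file.
X= sh -c 'echo "X is [$X]"'
# → X is []
```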
Problem: Invalid Shell Syntax
You have a number of errors in your code, including illegal whitespace, failure to dereference a variable properly, and potential IFS or quoting issues.
Solution: Use Correct Shell Syntax
Using legal Bash syntax works fine. For example:
cmd='source=$(mktemp); echo "$source"'
system(cmd)
On my system, this correctly prints the expected result on standard output, and returns correctly. For example, pry shows:
/tmp/tmp.of89uLTUqf
=> true
Better Solution: Use Backticks
Rather than shelling out using Kernel#system, why not just assign the variable in Ruby using backticks? For example:
source = `mktemp`
# => "/tmp/tmp.KVhGMzZRiG\n"
This seems simpler and less error-prone.
Make sure you're setting the file's permissions to be executable.
This may be a general question, but I'm new to Octave and want to get a string from the command line. However, I'm not sure what format the command-line arguments should be in. I have tried typing:
myscript hi
myscript --hi
myscript -hi
myscript (hi)
at the command line but I keep on getting this error:
error: invalid use of script "myscript filepath" in index expression
so I'm apparently not calling this correctly. The --hi form is what is shown on the official website, but it doesn't appear to work for me. This script I found online, just to test:
#! /usr/bin/octave -qf
printf("%s", program_name());
arg_list = argv();
for i = 1:nargin
  printf(" %s", arg_list{i});
end
printf("\n");
Is there something I need to implement in order for argv to work?
I am just starting too.
The error says you have a problem with the path name. You specify no explicit path (e.g., c:\root\myfiles\filex.txt), so it probably assumes the file is in your current directory.
If you type ls, can you see your file? You can either move the file to the current directory or use the cd command to change to the directory where the file is.
I've created a bash shell script file that I can run on my local bash (version 4.2.10) but not on a remote computer (version 3.2). Here's what I'm doing
A script file (some_script.sh) exists in a local folder
I've done $ chmod 755 some_script.sh to make it an executable
Now, I try $ ./some_script.sh
On my computer, this runs fine. On the remote computer, this returns a Command not found error:
./some_script.sh: Command not found.
Also, on the remote machine, executable files have stars (*) following their names. I don't know if this makes any difference, but I still get the same error when I include the star.
Is this because of the bash shell version? Any ideas to make it work?
Thanks!
The command not found message can be a bit misleading. The "command" in question can be either the script you're trying to execute or the shell specified on the shebang line.
For example, on my system:
% cat foo.sh
#!/no/such/dir/sh
echo hello
% ./foo.sh
./foo.sh: Command not found.
./foo.sh clearly exists; it's the interpreter /no/such/dir/sh that doesn't exist. (I find that the error message varies depending on the shell from which you invoke foo.sh.)
So the problem is almost certainly that you've specified an incorrect interpreter name on line one of some_script.sh. Perhaps bash is installed in a different location (it's usually /bin/bash, but not always.)
As for the * characters in the names of executable files, those aren't actually part of the file names. The -F option to the ls command causes it to show a special character after certain kinds of files: * for executables, / for directories, @ for symlinks, and so forth. Probably on the remote system you have ls aliased to ls -F or something similar. If you type /bin/ls, bypassing the alias, you should see the file names without the appended * characters; if you type /bin/ls -F, you should see the *s again.
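You can reproduce the indicators in a scratch directory (a sketch; the exact listing layout depends on your ls implementation):

```shell
dir=$(mktemp -d) && cd "$dir"
mkdir sub                 # directory  → trailing /
touch plain script.sh
chmod +x script.sh        # executable → trailing *
ln -s plain link          # symlink    → trailing @
ls -F
# typically: link@  plain  script.sh*  sub/
```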
Adding a * character in a command name doesn't do what you think it's doing, but it probably won't make any difference. For example, if you type
./some_script.sh*
the * is a wild card, and the command name expands to a list of all files in the current directory whose names match the pattern (this is completely different from the meaning of * as an executable file in ls -F output). Chances are there's only one such file, so
./some_script.sh* is probably equivalent to ./some_script.sh. But don't type the *; it's unnecessary and can cause unexpected results.