I have the following Ruby code:
cmd="
source= $(mktemp)
echo source
"
system("#{cmd}")
system("source= $(mktemp)")
I wanted the code to execute the "mktemp" command and store the temporary file name in the variable "source". However, the error message I get is:
sh: /tmp/tmp.EpXeLNkqjN: Permission denied
sh: /tmp/tmp.wVCqdqHSpp: Permission denied
------------------
(program exited with code: 0)
Press return to continue
The error was the same even when I ran the program as root.
However, when I run the mktemp command only, there is no problem. What is wrong?
You must not have a space after the = symbol. Replace your code with:
cmd="
source=$(mktemp)
echo $source
"
system("#{cmd}")
system("source=$(mktemp)")
Notice: no space after the = sign.
The problem with leaving a space after the = sign is that sh treats source= as an assignment of the empty string, and then tries to execute the result of expanding $(mktemp) (i.e., /tmp/tmp.EpXeLNkqjN or something similar) as a command, with the variable source set to the empty string in its environment. Since the file created by mktemp is not executable, you get "Permission denied".
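A quick way to see the difference at a shell prompt (a sketch; the temporary file names will differ on your system):

source= $(mktemp)   # runs the freshly created temp file as a command with source="" in its environment; fails with "Permission denied"
source=$(mktemp)    # assigns the output of mktemp to the variable source
echo "$source"      # prints something like /tmp/tmp.XXXXXXXXXX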
Problem: Invalid Shell Syntax
You have a number of errors in your code: invalid whitespace after the = sign, failure to dereference the variable properly (echo source instead of echo $source), and potential IFS or quoting issues.
Solution: Use Correct Shell Syntax
Using legal Bash syntax works fine. For example:
cmd='source=$(mktemp); echo "$source"'
system(cmd)
On my system, this correctly prints the temporary file name on standard output, and system returns true. For example, pry shows:
/tmp/tmp.of89uLTUqf
=> true
Better Solution: Use Backticks
Rather than shelling out using Kernel#system, why not just assign the variable in Ruby using backticks? For example:
source = `mktemp`
# => "/tmp/tmp.KVhGMzZRiG\n"
This seems simpler and less error-prone.
Make sure you're setting the file's permissions to be executable.
Related
I am a beginner with bash. I ran into a problem like this:
$ "make -p"
When I type the above at the bash command line, nothing happens: no error, no output.
I have searched for bash's double-quote syntax on many websites. All of them give a similar interpretation, such as:
https://www.gnu.org/software/bash/manual/html_node/Double-Quotes.html
and give examples like:
echo "argument"
I do not find anything like "echo argument". Moreover, I found a strange difference between the bash command line and bash scripts.
If I type a non-existent command at the command line:
$ "holy shit"
$ "look that"
nothing happens. But if I put the same lines in a bash script:
#!/bin/bash
"holy shit"
"look that"
and execute the script, an error message is thrown:
$ ./myshell
./myshell: line 2: holy shit: command not found
./myshell: line 3: look that: command not found
Could someone give a detailed explanation of the effect of double quotes when they enclose a whole command?
Why is there no output at the command line?
Why is it different between the command line and a script?
If you enter a command foo, the shell searches the directories listed in your PATH variable until it finds a command of this name. If there is none, you get the error message command not found.
If you enter a command which contains at least one slash (for example ./foo or foo/bar), the shell does not search the PATH, but assumes that you have already entered the correct path to your command. If it does not exist, you get the error message No such file or directory.
In your case,
"cd home"
searches for a file named cd home (one word, including the space) somewhere along your PATH, but there is no file of this name, so you get command not found. If you enter
"cd /home"
the shell bypasses the PATH search and assumes that there exists a directory named cd (i.e., the three characters c, d, space) in your current directory, and below it a file named home with the x-bit set. There is no such file (and no such directory) on your system, so you get the error message No such file or directory.
If you are in the mood for experimenting, you could try the following:
mydir="cd "
mkdir "$mydir"
echo "echo Hello Stranger" >"$mydir/home"
chmod +x "$mydir/home"
"cd /home"
This should print Hello Stranger. Note that in the assignment to mydir, there must be a single space between the cd and the closing quote.
The double quotes mean it is a string. You can do something like:
echo "Hello everybody"
either at the command line or in a script. Sometimes when people put something in quotes, you are supposed to replace what is in the quotes with your own value (removing the quotes), and sometimes people put quotes around a whole command simply to show exactly what you should type. For your example of "make -p", just type it without the quotes and it should work both at the command line and in a script.
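A minimal illustration of the difference (a sketch):

"make -p"   # the quotes make this a single word, so bash looks for a command literally named "make -p" (with the space)
make -p     # without quotes, bash runs make and passes it the option -p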
I'm new to Unix and to developing on it. In my new.sh script I wrote:
$USERNAME=user
$PASSWORD=sekrit
echo $USERNAME
and ran new.sh using bash new.sh
But I get the following errors
new.sh: line 1: =user: command not found
new.sh: line 2: =sekrit: command not found
How do I run that command and print the username variable in terminal?
USERNAME is the name of the variable. $USERNAME is the replacement (aka contents, aka value). Since USERNAME is empty, you effectively try to run a command named =user, which is what the error message tells you.
Remove the $ from $USERNAME=... and it will work.
As Jens notes in his answer, the problem is that an assignment to a variable is not prefixed with a $, so:
USERNAME=user
PASSWORD=sekrit
is the way to write what you wanted. You got an error because USERNAME was not set, so after expansion, the shell looked at the command as:
=user
=sekrit
and it could not find such commands on the system (not very surprisingly). However, be aware that if you have previously written:
USERNAME=archipelago
PASSWORD=anchovy
then the lines:
$USERNAME=user
$PASSWORD=sekrit
would have been equivalent to writing:
archipelago=user
anchovy=sekrit
You could see that by running set with no arguments; it would show you the values of all the variables set in the shell. You could search for words such as USERNAME and archipelago to see what happened.
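For example, something along these lines (a sketch) would show those variables:

set | grep -E '^(USERNAME|archipelago)='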
Now you've learned that, forget it. The number of times you'll need to use it is very limited (but it is handy on those rare — very rare — occasions when you need it).
For all practical purposes, don't write a $ on the left-hand side of a variable assignment in shell.
Is there a generic way in a bash script to "try" something but continue if it fails? The analogue in other languages would be wrapping it in a try/catch and ignoring the exception.
Specifically I am trying to source an optional satellite script file:
. $OPTIONAL_PATH
But when executing this, if $OPTIONAL_PATH doesn't exist, the whole script screeches to a halt.
I realize I could check to see if the file exists before sourcing it, but I'm curious if there is a generic reusable mechanism I can use that will ignore the error without halting.
Update: Apparently this is not normal behavior. I'm not sure why this is happening. I'm not explicitly calling set -e anywhere ($- is hB), yet it halts on the error. Here is the output I see:
./script.sh: line 36: projects/mobile.sh: No such file or directory
I added an echo "test" immediately after the source line, but it never prints, so it's not anything after that line that is exiting. I am running Mac OS 10.9.
Update 2: Nevermind, it was indeed shebanged as #!/bin/sh instead of #!/bin/bash. Thanks for the informative answer, Kaz.
Failed commands do not abort the script unless you explicitly configure that mode with set -e.
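For example, an ordinary failed command only becomes fatal once that mode is switched on (a sketch):

#!/bin/bash
false                 # without set -e, the script keeps going
echo "still running"
set -e
false                 # with set -e in effect, the script exits here
echo "never printed"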
With regard to Bash's dot command, things are tricky. If we invoke bash as /bin/sh then it bails the script if the . command does not find the file. If we invoke bash as /bin/bash then it doesn't fail!
$ cat source.sh
#!/bin/sh
. nonexistent
echo here
$ ./source.sh
./source.sh: 3: .: nonexistent: not found
$ ed source.sh
35
1s/sh/bash/
wq
37
$ ./source.sh
./source.sh: line 3: nonexistent: No such file or directory
here
It does respond to set -e; if we have #!/bin/bash, and use set -e, then the echo is not reached. So one solution is to invoke bash this way, via a #!/bin/bash shebang rather than #!/bin/sh.
If you want to keep the script maximally portable, it looks like you have to do the test.
The behavior of the dot command aborting the script is required by POSIX. Search for the "dot" keyword here. Quote:
If no readable file is found, a non-interactive shell shall abort; an interactive shell shall write a diagnostic message to standard error, but this condition shall not be considered a syntax error.
Arguably, this is the right thing to do, because dot is used for including pieces of the script. How can the script continue when a whole chunk of it has not been found?
On the other hand, arguably this is brain-damaged behavior, inconsistent with the treatment of other commands, and so Bash makes it consistent in its non-POSIX-conforming mode. If programmers want a failed command to abort the script, they can use set -e.
I tend to agree with Bash. The POSIX behavior is actually more broken than initially meets the eye, because this also doesn't work the way you want:
if . nonexistent ; then
echo loaded
fi
Even if the command is tested, it still aborts the script when it bails.
Thank GNU-deness we have alternative utilities, with source code.
You have several options:
Make sure set -e wasn't used, or turn it off with set +e. Your bash script should not exit by default simply because the . command failed.
Test that the file exists prior to sourcing.
[ -f "$OPTIONAL_PATH" ] && . "$OPTIONAL_PATH"
This option is complicated by the fact that if $OPTIONAL_PATH does not contain any slashes, . will still try to find the file in your PATH (see the sketch after these options).
If you want to keep set -e on, "hide" the failure like this:
. "$OPTIONAL_PATH" || true
Even if the source fails, the exit status of the command list as a whole will be 0, due to the || true.
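Regarding the PATH-search caveat in the second option, one way to sidestep it is to force the name to contain a slash before testing and sourcing it (a sketch):

case "$OPTIONAL_PATH" in
    */*) ;;                                   # already contains a slash; . will not search PATH
    *)   OPTIONAL_PATH="./$OPTIONAL_PATH" ;;  # pin the lookup to the current directory
esac
[ -f "$OPTIONAL_PATH" ] && . "$OPTIONAL_PATH"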
(Much of this is covered [better] by Kaz's answer, especially the references to the POSIX standard, but I wasn't sure when or if he would undelete his answer.)
This is not the default behavior. Did you set -e or use #!/bin/bash -e anywhere in your script, to make it automatically exit on failure?
If so, you can use
. $OPTIONAL_PATH || true
to continue anyways.
I've been putting together a bash script that takes an ini file (with a format that I've been developing alongside the script) and reads through the file, performing the actions specified.
One of the functions in the ini format allows for a shell command to be passed in and run using eval. I'm running into a problem when the commands contain a variable name.
eval (or the shell in general) doesn't seem to be substituting the values correctly and most of the time it seems to replace all the variable names with blanks, breaking the command. Subshells to create a string output seem to have the same problem.
The strange part is that this worked on my development machine (running Linux Mint 13), but when I moved the script to the target machine running CentOS 5.8, these issues showed up.
Some examples of code I read in from the ini file:
shellcmd $toolspath/program > /path/file
shellcmd parsedata=$( cat /path/file )
These go through a script function that strips off the leading shellcmd and then evals the string using
eval ${scmd}
Any ideas on what might be causing the weird behavior and anything I can try to resolve the problem? My ultimate goal here is to have the ability to read in a line from a file and have my script execute it and be able to correctly handle script variables from the read in command.
Using Bash 3.2.25 (CentOS 5) I tried this, and it works fine:
toolspath='/bin'
while read prefix scmd
do
if [[ $prefix == 'shellcmd' ]]
then
echo "Evaluating: <$scmd>"
eval ${scmd}
else
echo "$prefix ignored"
fi
done < ini
with:
shellcmd $toolspath/ls > /home/user1/file
shellcmd parsedata=$( cat /home/user1/file )
shellcmd echo $parsedata
I obviously had to set paths. Most likely you had to change the paths when you switched machines. Do your paths have embedded spaces?
How did you transfer the files? Did you perchance go via Windows? On a whim I ran unix2dos on the ini file and I got symptoms similar to those you describe. That's my best guess.
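If DOS line endings are the culprit, a quick check and fix might look like this (a sketch; assumes GNU cat and tr, and that the file is named ini):

cat -A ini | head                                 # lines ending in ^M$ indicate CR/LF (DOS) line endings
tr -d '\r' < ini > ini.fixed && mv ini.fixed ini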
I found a suitable alternative for the one command that was causing this issue, so I'll mark this question as solved.
From all my investigation it appears that I've discovered an obscure bug in the bash shell. The particular command I was trying to eval returned a terminal code in its output, and because the shell was in a read loop with input redirected from a file, this resulted in some strange behavior. My solution was to move the call to this particular command outside of the read loop. It still doesn't solve the root problem, which I believe to be a bug in the bash shell. Hope this will help someone else who has run into this same (obscure) issue.
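For anyone hitting something similar, one general technique (not the workaround used above) is to read the file on a separate file descriptor so that commands run inside the loop keep their normal standard input; a sketch, assuming the ini layout from the answer above:

while read -r prefix scmd <&3
do
    if [[ $prefix == 'shellcmd' ]]
    then
        eval "${scmd}"    # this command's stdin is no longer the ini file
    fi
done 3< ini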
This may be a general question, but I'm new to octave and want to get a string from the command line. However, I'm not sure what format the command line arguments are in. I have attempted typing:
myscript hi
myscript --hi
myscript -hi
myscript (hi)
at the command line but I keep on getting this error:
error: invalid use of script "myscript filepath" in index expression
so I'm apparently not calling this correctly. The --hi form is what is shown on the official website, but it doesn't appear to work for me. I got this script online just to test:
#! /usr/bin/octave -qf
printf("%s", program_name());
arg_list = argv();
for i = 1:nargin
printf(" %s", arg_list{i});
end
printf("\n");
Is there something I need to implement in order for argv to work?
I am starting out too.
It says you have an error in the path name. You specify nothing explicit for the path (e.g., c:\root\myfiles\filex.txt), so it probably assumes the file is in your current directory.
If you type ls, can you see your file? You can either move the file to the current directory or use the cd command to change the current directory to where the file is.