Bash "command not found" error - bash

I am trying to edit my .bashrc file with a custom function to launch xwin. I want it to be able to open in multiple windows, so I decided to make a function that accepts 1 parameter: the display number. Here is my code:
function test(){
    a=$(($1-0))
    "xinit -- :$a -multiwindow -clipboard &"
}
The reason I created a variable "a" to hold the input is that I suspected the input was being read in as a string rather than a number. I hoped that subtracting 0 would convert the string to an integer, but I'm not actually sure whether it does. Now, when I call
test 0
I am given the error
-bash: xinit -- :0 -multiwindow -clipboard &: command not found
How can I fix this? Thanks!

Because the entire quoted command is acting as the command itself:
$ "ls"
a b c
$ "ls -1"
-bash: ls -1: command not found
Get rid of the double quotation marks surrounding your xinit:
xinit -- ":$a" -multiwindow -clipboard &

In addition to the double-quotes bishop pointed out, there are several other problems with this function:
test is a standard, and very important, command. Do not redefine it! If you do, you risk having some script (or sourced file, or whatever) run:
if test $num -eq 5; then ...
Which will fire off xinit on some random window number, then continue the script as if $num was equal to 5 (whether or not it actually is). This way lies madness.
As chepner pointed out in a comment, bash doesn't really have an integer type. To it, an integer is just a string that happens to contain only digits (and maybe a "-" at the front), so converting to an integer is a no-op. But what you might want to do is check whether the parameter got left off. You can either check whether $1 is empty (e.g. if [[ -z "$1" ]]; then echo "Usage: ..." >&2 etc), or supply a default value with e.g. ${1:-0} (in this case, "0" is used as the default).
Finally, don't use the function keyword. bash tolerates it, but it's nonstandard and doesn't do anything useful.
So, here's what I get as the cleaned-up version of the function:
launchxwin() {
    xinit -- ":${1:-0}" -multiwindow -clipboard &
}
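For example, a quick illustration of the ${1:-0} default (the display numbers here are arbitrary):
launchxwin       # no argument: the server comes up on display :0
launchxwin 1     # a second server on display :1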

That happens because bash interprets everything inside quotes as a string. A command is an array of strings in which the first element is a binary or an internal shell command; subsequent strings in the array are taken as arguments.
When you type:
"xinit -- :$a -multiwindow -clipboard &"
the shell treats everything you wrote as a single command name. Depending on the command or program you run, the rest of the arguments can sometimes be a single string, but mostly you use quotes only when passing an argument that contains spaces, like:
mkdir "My Documents"
That creates a single directory named My Documents. Alternatively, you can escape the spaces:
mkdir My\ Documents
But remember, "$" is a special character like "\". It gets interpreted by the shell as a variable. "$a" will be substituted by its value before executing. If you use a simple quote ('$a') it will not be interpreted by the shell.
Also, "&" is a special character that executes the command in background. You should probably pass it outside the quotes also.

Related

How to pass variables with special characters into a bash script when called from terminal

Hello all, I have a program running on a Linux OS that allows me to call a bash script upon a trigger (such as a file transfer). I run something like:
/usr/bin/env bash -c "updatelog.sh '${filesize}' '${filename}'"
and the script's job is to update the log file with the file name and file size. But if I pass in a file name that contains a single quote, it breaks the script with the error "Unexpected EOF while looking for matching `''".
I realize that a file name with a single quote makes the calling command invalid, since the quote interferes with the command itself. However, I don't want to sanitize the variables if I can help it, because I would like my log to show the exact file name, to make it easier to cross-reference later. Is this possible, or is sanitizing the only option here?
Thanks very much for your time and assistance.
Sanitization is absolutely not needed.
The simplest solution, assuming your script is properly executable (has +x permissions and a valid shebang line), is:
./updatelog.sh "$filesize" "$filename"
If for some reason you must use bash -c, use single quotes instead of double quotes around your code, and keep your data out-of-band from that code:
bash -c 'updatelog.sh "$@"' 'updatelog' "$filesize" "$filename"
Note that only updatelog.sh "$@" is inside the -c argument and parsed as code, and that this string is in single quotes, passed through without any changes whatsoever.
Following it are your arguments $0, $1 and $2; $0 is used when printing error messages, while $1 and $2 go into the list of arguments -- aka "$@" -- passed through to updatelog.sh.
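To see that an awkward file name survives intact, here is a quick sketch with printf standing in for updatelog.sh (the file name is made up):
filesize=1024
filename="john's file.txt"
# the single quote arrives unharmed, because it is data, never parsed as code
bash -c 'printf "size=%s name=%s\n" "$@"' 'demo' "$filesize" "$filename"
# -> size=1024 name=john's file.txt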

How to create a bash alias for the command "cd ~1"

In bash, I use the "pushd ." command to save the current directory on the stack.
After issuing this command in a couple of different directories, I have multiple directories saved on the stack, which I can see by issuing the "dirs" command.
For example, the output of the "dirs" command in my current bash session is given below:
0 ~/eclipse/src
1 ~/eclipse
2 ~/parboil/src
Now, to switch to the 0th directory, I issue the command "cd ~0".
I want to create a bash alias or a function for this command.
Something like "xya 0", which will switch to the 0th directory on the stack.
I wrote following function to achieve this -
xya(){
    cd ~$1
}
Where "$1" in above function, is the first argument passed to the function "xya".
But, I am getting the following error -
-bash: cd: ~1: No such file or directory
Can you please tell me what is going wrong here?
Generally, bash parsing happens in the following order:
brace expansion
tilde expansion
parameter, variable, arithmetic expansion; command substitution (same phase, left-to-right)
word splitting
pathname expansion
Thus, by the time your parameter is expanded, tilde expansion is already finished and will not take place again without something explicit, such as use of eval.
If you know the risks and are willing to accept them, use eval to force parsing to restart at the beginning after the expansion of $1 is complete. The version below tries to mitigate the damage should something that isn't eval-safe be passed as an argument:
xya() {
    local cmd
    printf -v cmd 'cd ~%q' "$1"
    eval "$cmd"
}
...or, less cautiously (which is to say that the below trusts your arguments to be eval-safe):
xya() {
    eval "cd ~$1"
}
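To see why the printf -v version is the cautious one, try %q by hand with a deliberately hostile argument (made up purely for demonstration); the metacharacters come back escaped, so the eval'd command cannot run anything extra:
$ printf 'cd ~%q\n' '1; rm -rf /tmp/x'
cd ~1\;\ rm\ -rf\ /tmp/x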
You can let dirs print the absolute path for you:
xya(){
    cd "$(dirs -l +${1-0})"
}
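For illustration, here's what a session might look like with the stack from the question (assuming, hypothetically, that ~ is /home/user):
$ dirs -v
 0  ~/eclipse/src
 1  ~/eclipse
 2  ~/parboil/src
$ xya 1     # same effect as cd ~1
$ pwd
/home/user/eclipse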

Script takes only first part of double quotes

Yesterday I asked a similar question about escaping double quotes in env variables, but it didn't solve my problem (probably because I didn't explain well enough), so I would like to be more specific.
I'm trying to run a script (which I know is written in Perl), although I have to use it as a black box because of a permissions issue (so I don't know how the script works). Let's call this script script_A.
I'm trying to run a basic command in the shell: script_A -arg "date time".
If I run it from the command line, it works fine, but if I try to use it from a bash or Perl script (for example using the system operator), it takes only the first part of the string sent as an argument. In other words, it fails with the following error: '"date' is not valid.
Example to specify a little bit more:
If I run from the command line (works fine):
> script_A -arg "date time"
If I run from (for example) a Perl script (fails):
my $args = $ENV{SOME_ENV}; # Assume that SOME_ENV has '-arg "date time"'
my $cmd = "script_A $args";
system($cmd");
I think that the problem comes from the environment variable, but I can't use single quotes when defining the env variable. For example, I can't use the following method:
setenv SOME_ENV '-arg "date time"'
Because it fails with the following error: '"date' is not valid.".
Also, I tried to use the following method:
setenv SOME_ENV "-arg '"'date time'"'"
But now the env variable contains:
echo $SOME_ENV
> -arg 'date time' # should be -arg "date time"
Another note: using \" fails in the shell (I tried it).
Any suggestions on how to locate the reason for the error and how to solve it?
The $args, obtained from %ENV as you show, is a string.
The problem is in what happens to that string as it is manipulated before arguments are passed to the program, which needs to receive the strings -arg and date time.
If the program is executed in a way that bypasses the shell, as in your example, then the whole -arg "date time" is passed to it as its first argument. This is clearly wrong, as the program expects -arg and then another string for its value (date time).
If the program were executed via the shell, which happens when there are shell metacharacters in the command line (not the case in your example), then the shell would break the string into words, except for the quoted part; this is how it works from the command line. That can be enforced with
system('/bin/tcsh', '-c', $cmd);
This is the most straightforward fix, but I can't honestly recommend involving the shell just for argument parsing. Also, you are then in the game of layered quoting and escaping, which can get rather involved and tricky. For one, if things aren't right the shell may end up breaking the command into the words -arg, "date, time"
The way you set the environment variable works:
> setenv SOME_ENV '-arg "date time"'
> perl -wE'say $ENV{SOME_ENV}' #--> -arg "date time" (so it works)
which I believe has always worked this way in [t]csh.
Then, in a Perl script, parse this string into the strings -arg and date time, and have the program executed in a way that bypasses the shell (if the shell isn't needed by the command):
my @args = $ENV{SOME_ENV} =~ /(\S+)\s+"([^"]+)"/;
my @cmd = ('script_A', @args);
system(@cmd) == 0 or die "Error with system(@cmd): $?";
This assumes that SOME_ENV's first word is always the option's name (-arg) and that all the rest is always the option's value, under quotes. The regex extracts the first word, as consecutive non-space characters, and, after spaces, everything inside quotes.† These are the program's arguments.
In the system LIST form, the program that is the first element of the list is executed without using a shell, and the remaining elements are passed to it as arguments. Please see system for more on this, and also for the basics of how to investigate failure by looking into the $? variable.
It is in principle advisable to run external commands without the shell. However, if your command needs the shell, then make sure that the string is escaped just right to preserve quotes.
Note that there are modules that make it easier to use external commands. A few, from simple to complex: IPC::System::Simple, Capture::Tiny, IPC::Run3, and IPC::Run.
I must note that that's an awkward environment variable; is there any way to organize things otherwise?
† To make this work for non-quoted arguments as well (-arg date), make the quote optional:
my @args = $ENV{SOME_ENV} =~ /(\S+)\s+"?([^"]+)/;
where I have now left out the (unnecessary) closing quote for simplicity.

Calling a shell command from Applescript with quotes

This seems like it should be simple, but I'm pulling out my remaining hair trying to get it to work. In a shell script, I want to run some AppleScript code that defines a string, then pass that string (containing a single quote) to a shell command that calls PHP's addslashes function, to get back a string with that single quote escaped properly.
Here's the code I have so far - it's returning a syntax error.
STRING=$(osascript -- - <<'EOF'
set s to "It's me"
return "['test'=>'" & (do shell script "php -r 'echo addslashes(\"" & s & "\");") & "']"
EOF)
echo -e $STRING
It's supposed to return this:
['test'=>'It\'s me']
First, when asking a question like this, please include what's happening, not just what you're trying to do. When I try this, I get:
42:99: execution error: sh: -c: line 0: unexpected EOF while looking for matchin
sh: -c: line 1: syntax error: unexpected end of file (2)
(which is actually two error messages, with one partly overwriting the other.) Is that what you're getting?
If it is, the problem is that the inner shell command you're creating has quoting issues. Take a look at the AppleScript snippet that tries to run a shell command:
do shell script "php -r 'echo addslashes(\"" & s & "\");"
Since s is set to It's me, this runs the shell command:
php -r 'echo addslashes("It's me");
Which has the problem that the apostrophe in It's me is acting as a close-quote for the string that starts 'echo .... After that, the double-quote in me"); is seen as opening a new quoted string, which doesn't get closed before the end of the "file", causing the unexpected EOF problem.
The underlying problem is that you're trying to pass a string from AppleScript to the shell to PHP... but each of those has its own rules for parsing strings (with different ideas about how quoting and escaping work). Worse, it looks like you're doing this so you can get an escaped string (following which set of escaping rules?) to pass to something else... This way lies madness.
I'm not sure what the real goal is here, but there has to be a better way; something that doesn't involve a game of telephone with players that all speak different languages. If not, you're pretty much doomed.
BTW, there are a few other dubious shell-scripting practices in the script:
Don't use all-caps variable names in shell scripts. There are a bunch of all-caps variables that have special meanings, and if you accidentally use one of those for something else, weird results can happen.
Put double-quotes around all variable references in scripts, to avoid them getting split into multiple "words" and/or expanded as shell wildcards. For example, if the variable string were set to "['test'=>'It\'s-me']" and you happened to have files named "t" and "m" in the current directory, echo -e $string would print "m t", because those are the files that match the [...] pattern (see the demonstration after this list).
Don't use echo with options and/or to print strings that might contain escapes, since different versions treat these things differently. Some versions, for example, will print the "-e" as part of the output string. Use printf instead. The first argument to printf is a format string that tells it how to format all of the rest of the arguments. To emulate echo -e "$string" in a more reliable form, use printf '%b\n' "$string".
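A small demonstration of the last two points (run in an otherwise empty directory; the file names are made up):
string="['test'=>'It\'s-me']"
touch t m
echo -e $string           # unquoted: the [...] glob matches the files -> m t
printf '%b\n' "$string"   # quoted, via printf: prints the string itself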
To complement Gordon Davisson's helpful answer with a pragmatic solution:
Shell strings cannot contain \0 (NUL) characters, but the following sed command emulates all other escaping that PHP's (oddly named) addslashes function performs (\-escaping instances of ', " and \):
string=$(osascript <<'EOF'
set s to "It's me\\you and we got 3\" of rain."
return "['test'=>'" & (do shell script "sed 's/[\"\\\\'\\'']/\\\\&/g' <<<" & quoted form of s) & "']"
EOF
)
printf '%s\n' "$string"
yields
['test'=>'It\'s me\\you and we got 3\" of rain.']
Note the use of quoted form of, which is crucial for passing a string from AppleScript to a do shell script shell command with proper quoting.
Also note how the closing here-doc delimiter, EOF, is on its own line to ensure that it is properly recognized. In Bash 3.2.57 as used on macOS 10.12 (also when invoked as /bin/sh, which is what do shell script does), this isn't strictly necessary, but Bash 4.x would rightfully complain about EOF) with the warning: here-document at line <n> delimited by end-of-file (wanted 'EOF')
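Outside AppleScript, the same sed command can be exercised directly in bash (the sample string is the one from the snippet above; note the '\'' dance needed to embed a single quote in a single-quoted shell string):
s='It'\''s me\you and we got 3" of rain.'
sed 's/["\\'\'']/\\&/g' <<<"$s"
# -> It\'s me\\you and we got 3\" of rain.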

Bash- passing input without shell interpreting parameter expansion chars

So I have a script where I type script.sh followed by input for a set of if-else statements, like this:
script.sh fnSw38h$?2
The output echoes out the input in the end.
But I noticed that $? is interpreted as 0/1 so the output would echo:
fnSw38h12
How can I stop the shell from expanding the characters and make it take the input at face value?
I looked at something like noglob or similar options, but they didn't work.
When I put it like this:
script.sh 'fnSw38h$?2'
it works. But how do I capture that within single quotes ('') when I can't reference variables inside them, like Var='$1'?
Please help!
How to pass a password to a script
I gather from the comments that the true purpose of this script is to validate a password. If this is an important or sensitive application, you really should be using professional security tools. If this application is not sensitive or this is just a learning exercise, then read on for a first introduction to the issues.
First, do not do this:
script.sh fnSw38h$?2
This password will appear in ps and be visible to any user on the system in plain text.
Instead, have the user type the password as input to the script, such as:
#!/bin/sh
IFS= read -r var
Here, read will gather input from the keyboard free from shell interference and it will not appear in ps output.
var will have the password for you to verify but you really shouldn't have plain text passwords saved anywhere for you to verify against. It is much better to put the password through a one-way hash and then compare the hash with something that you have saved in a file. For example:
var=$(head -n1 | md5sum)
Here, head will read one line (the password) and pass it to md5sum which will convert it to a hash. This hash can be compared with the known correct hash for this user's password. The text returned by head will be exactly what the user typed, unmangled by the shell.
Actually, for a known hash algorithm, it is possible to make a reverse look-up table for common passwords. So, the solution is to create a variable, called salt, that holds some user-dependent information:
var=$( { head -n1; echo "$salt"; } | md5sum)
The salt does not have to be kept secret. It is just there to make look-up tables more difficult to compute.
The md5sum algorithm, however, has been found to have some weaknesses, so it should be replaced with a more recent hash algorithm. As I write, that would probably be a SHA-2 variant.
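Putting the pieces together, here is a minimal sketch of the whole check, swapping in sha256sum as one SHA-2 option (the salt value and the hash-file path are made-up placeholders):
#!/bin/sh
# Verify a typed password against a stored salted hash (sketch only).
salt="alice:1987"                          # per-user; does not need to be secret
stored_hash=$(cat /etc/myapp/alice.hash)   # hypothetical location, produced by the same pipeline
var=$( { head -n1; echo "$salt"; } | sha256sum)
if [ "$var" = "$stored_hash" ]; then
    echo "password accepted"
else
    echo "password rejected" >&2
    exit 1
fi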
Again, if this is a sensitive application, do not use home-made tools.
Answer to original question
how do I capture that within single quotes ('') when I can't state variables inside it like Var='$1'
The answer is that you don't need to. Consider, for example, this script:
#!/bin/sh
var=$1
echo $var
First, note that $$ and $? are both shell variables:
$ echo $$ $?
28712 0
Now, let's try our script:
$ bash ./script.sh '$$ $?'
$$ $?
These variables were not expanded because (1) when they appeared on the command line, they were in single quotes, and (2) in the script, they were assigned to variables, and bash does not expand variables recursively. In other words, on the line echo $var, bash will expand $var to get $$ $?, but there it stops; it does not expand what was in var.
You can escape any dollar signs in a double-quoted string that are not meant to introduce a parameter expansion.
var=foo
# Pass the literal string fnSw38h$?2foo to script.sh
script.sh "fnSw38h\$?2$var"
You cannot do what you are trying to do. What is entered on the command line (such as the arguments to your script) must be in shell syntax, and will be interpreted by the shell (according to the shell's rules) before being handed to your script.
When someone runs the command script.sh fnSw38h$?2, the shell parses the argument as the text "fnSw38h", followed by $? which means "substitute the exit status of the last command here", followed by "2". So the shell does as it's been told, it substitutes the exit status of the last command, then hands the result of that to your script.
Your script never receives "fnSw38h$?2", and cannot recover the argument in that form. It receives something like "fnSw38h02" or "fnSw38h12", because that's what the user asked the shell to pass it. That might not be what the user wanted to pass it, but as I said, the command must be in shell syntax, and in shell syntax an unescaped and unnquoted $? means "substitute the last exit status here".
If the user wants to pass "$?" as part of the argument, they must escape or single-quote it on the command line. Period.
