Applescript does not execute shell command - bash

I have an applescript
do shell script "echo -n -e \\\\x61\\\\x61\\\\x61 > /tmp/file.txt"
But the file.txt does not contain "aaa"!
It contains "-n -e aaa\n" instead.
Can someone help me with that problem?

Different versions of echo are hopelessly inconsistent in how they interpret command options (like -n and -e) and/or escape sequences in the string. It's not just bash vs. sh as cdarke said; it's much messier than that. The best thing to do is just avoid either one by using printf instead. It's a bit more complicated to use than echo, but completely worth it because your scripts won't break just because the latest version of the shell was compiled with different options(!).
In this case, using printf is actually even simpler than using echo, because it always interprets escape sequences in its first argument (the "format string"; the remaining arguments are treated differently), and doesn't print a newline at the end unless you explicitly tell it to with \n at the end of the format string. So your script becomes:
do shell script "printf \\\\x61\\\\x61\\\\x61 > /tmp/file.txt"
...although you can simplify it further by using single-quotes to keep the shell from interpreting escapes before they get to printf:
do shell script "printf '\\x61\\x61\\x61' > /tmp/file.txt"
(The escapes are still doubled, because they're being interpreted by AppleScript. But at least they don't need to be quadrupled anymore.)
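If you want to sanity-check what printf produces outside of AppleScript, you can run the equivalent command directly in Terminal (the xxd dump in the comment is what you should expect to see):
printf '\x61\x61\x61' > /tmp/file.txt
xxd /tmp/file.txt    # 00000000: 6161 61   aaa  (three bytes, no trailing newline)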
(p.s. relevant xkcd)

Related

How does : # use perl eval 'exec perl -S $0 ${1+"$@"}' set the perl version?

I have to debug an old perl script which, apparently, is setting the version of perl in a way which I do not understand...
: # use perl
eval 'exec perl -S $0 ${1+"$@"}'
if 0;
perl -h says that -S means "look for programfile using PATH environment variable". $0 is the current program. And I've read that $@ means "The Perl syntax error message from the last eval command." But why are they adding 1 to that? And how does this all fit together? What is it doing?
Part of what I have to debug has to do with the fact that it's picking an older version of perl that I don't want. For everything else, we use #!/usr/bin/env perl which, I suspect, may be doing the same thing. I also suspect that my solution may lie in fixing $PATH (or preventing the code that's goofing it up from goofing it up). But I'd like to go at this with a better understanding of how it's picking the version now.
Thanks for any help!
That's intended to run whatever version of perl is first in your path, by treating the file as a shell script that then execs perl. In this context, ${1+"$@"} is the arguments (if any) passed to the script.
From the bash manual:
${parameter:+word}
If parameter is null or unset, nothing is substituted, otherwise the expansion of word is substituted.
and
Omitting the colon results in a test only for a parameter that is unset
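In other words, ${1+"$@"} expands to "$@" only when at least one argument was passed; with no arguments it expands to nothing at all, which works around old Bourne shells where a bare "$@" produced a single empty argument. A throwaway script (the name args-demo.sh is just for illustration) makes the behaviour easy to see:
#!/bin/sh
# args-demo.sh - show how ${1+"$@"} forwards the script's arguments
printf 'got %d argument(s)\n' "$#"
for arg in ${1+"$@"}; do
  printf '<%s>\n' "$arg"
done
Running ./args-demo.sh "one arg" two prints <one arg> and <two>; running it with no arguments prints nothing after the count.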
There's a similar example in perlrun:
This example works on many platforms that have a shell compatible with Bourne shell:
#!/usr/bin/perl
eval 'exec /usr/bin/perl -wS $0 ${1+"$@"}'
if $running_under_some_shell;
The system ignores the first line and feeds the program to /bin/sh, which proceeds to try to execute the Perl program as a shell script. The shell executes the second line as a normal shell command, and thus starts up the Perl interpreter. On some systems $0 doesn't always contain the full pathname, so the -S tells Perl to search for the program if necessary. After Perl locates the program, it parses the lines and ignores them because the variable $running_under_some_shell is never true. If the program will be interpreted by csh, you will need to replace ${1+"$@"} with $*, even though that doesn't understand embedded spaces (and such) in the argument list. To start up sh rather than csh, some systems may have to replace the #! line with a line containing just a colon, which will be politely ignored by Perl.
Using /usr/bin/env is another way to do the same thing, yes.
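Since both the eval/exec trick and #!/usr/bin/env perl just take whatever perl is first in $PATH, the quickest way to see (and change) which version you'll get is to inspect PATH; /opt/newperl/bin below is a hypothetical directory standing in for wherever the perl you actually want lives:
command -v perl                        # which perl the eval/exec trick (and env) will pick up
perl -v | head -2                      # confirm its version
export PATH=/opt/newperl/bin:$PATH     # hypothetical: put the preferred perl first
command -v perl                        # now resolves to /opt/newperl/bin/perl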

How to use echo command to output escape sequence for color

domain="www.google.com"
echo -e "\e[1;34m"$domain"\e[0m"
I expected this to output www.google.com in blue letters.
Instead I got
-e \e[1;34mwww.google.com\e[0m
The environment or shell used can have an effect here. One thing you could do is use ANSI-C quoting:
echo $'\e[1;34m'${domain}$'\e[0m'
Words of the form $'string' are treated specially. The word expands to
string, with backslash-escaped characters replaced as specified by the
ANSI C standard.
https://www.gnu.org/software/bash/manual/html_node/ANSI_002dC-Quoting.html
If you run a script with sh script.sh, you're explicitly using sh as the shell rather than the one in the shebang line. That's bad news if sh isn't a link to bash. A plain sh shell may not support echo -e.
Type ./script.sh to use the interpreter in the shebang line.
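If you'd rather not rely on echo -e or Bash-specific quoting at all, printf interprets the escapes in its format string in any POSIX shell; this sketch uses the same bold-blue sequence from the question:
domain="www.google.com"
printf '\033[1;34m%s\033[0m\n' "$domain"    # \033 is ESC; prints the domain in bold blue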

removing backslash with tr

So I'm removing special characters from filenames and replacing them with spaces. I have it all working apart from files with single backslashes in them.
Note these files are created in the Finder on OS X
old_name="testing\this\folder"
new_name=$(echo $old_name | tr '<>:\\#%|?*' ' ');
This results in new_name being "testing hisolder"
How can I just remove the backslashes and not the preceding character?
This results in new_name being "testing hisolder"
This string looks like the result of echo -e "testing\this\folder", because \t and \f are actually replaced with the tabulation and form feed control characters.
Maybe you have an alias like alias echo='echo -e', or maybe the implementation of echo in your version of the shell interprets backslash escapes:
POSIX does not require support for any options, and says that the
behavior of ‘echo’ is implementation-defined if any STRING contains a
backslash or if the first argument is ‘-n’. Portable programs can use
the ‘printf’ command if they need to omit trailing newlines or output
control characters or backslashes.
(from the info page)
So you should use printf instead of echo in new software. In particular, echo $old_name should be replaced with printf %s "$old_name".
There is a good explanation in this discussion, for instance.
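Putting that together with the tr set from your question, a sketch of the fixed command looks like this (same characters removed, but the backslashes now survive the trip into tr):
old_name='testing\this\folder'
new_name=$(printf '%s' "$old_name" | tr '<>:\\#%|?*' ' ')
printf '%s\n' "$new_name"    # -> testing this folder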
No need for printf
As @mklement0 suggested, you can avoid the pipe by means of the Bash here string:
tr '<>:\\#%|?*' ' ' <<<"$old_name"
Ruslan's excellent answer explains why your command may not be working for you and offers a robust, portable solution.
tl;dr:
You probably ran your code with sh rather than bash (even though on macOS sh is Bash in disguise), or you had shell option xpg_echo explicitly turned on.
Use printf instead of echo for portability.
In Bash, with the default options and using the echo builtin, your command should work as-is (except that you should double-quote $old_name for robustness), because echo by default does not expand escape sequences such as \t in its operands.
However, Bash's echo can be made to expand control-character escape sequences:
explicitly, by executing shopt -s xpg_echo
implicitly, if you run Bash as sh or with the --posix option (which, among other options and behavior changes, activates xpg_echo)
Thus, your symptom may have been caused by running your code from a script with shebang line #!/bin/sh, for instance.
However, if you're targeting sh, i.e., if you're writing a portable script, then echo should be avoided altogether for the very reason that its behavior differs across shells and platforms - see Ruslan's printf solution.
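If you want to reproduce the symptom, here is a minimal interactive-Bash experiment; the control characters are spelled out in the comments because they won't be visible as such in a terminal:
old_name='testing\this\folder'
echo "$old_name"       # -> testing\this\folder (default Bash echo leaves escapes alone)
shopt -s xpg_echo
echo "$old_name"       # -> testing<TAB>his<FORMFEED>older (escapes now expanded)
shopt -u xpg_echo      # back to the default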
As an aside: perhaps a more robust approach for your tr command is whitelisting: stating only the characters that are explicitly allowed in your result, and excluding all others with the -C option:
old_name='testing\this\folder'
new_name=$(printf '%s' "$old_name" | tr -C '[:alnum:]_-' ' ')
That way, any characters that aren't either letters, numbers, _, or - are replaced with a space.
With Bash, you can use parameter expansion:
$ old_name="testing\this\folder"
$ new_name=${old_name//[<>:\\#%|?*]/ }
$ echo $new_name
testing this folder
For more, please refer to the Bash manual on shell parameter expansion.
I think your test case is missing proper escaping for \, so you're not really testing the case of a backslash contained in a string.
This worked for me:
old_name='testing\\this\\folder'
new_name=$(echo $old_name | tr '<>:\\#%|?*' ' ');
echo $new_name
# testing this folder

How do I write commands that output (file) names with spaces to make them work within backticks?

Ever the lazy unix/linux command line user, I use quite a few little shell scripts to help me avoid typing.
For example, I have a script lst that prints the name of the most recent file in the current directory. If that file is called mytext and I type emacs `lst`, then mytext will open in emacs.
However, if the most recent file is called my text then emacs `lst` will fail, as the shell interprets this command as emacs my text instead of emacs my\ text
Using quotes like in emacs "`lst`" corrects the problem, but uses a whopping two extra keystrokes
Is there any way to modify lst so that the command will work without the extra keystrokes? Outputting a backslash-escaped filename doesn't work.
I use zsh, but the problem (and hopefully the solution) is the same in bash
Never use backticks. You must always quote properly. There are no shortcuts. printf %q is the only alternative that's remotely portable (Zsh's ${(q)var} is similar), but these aren't a good fit for your problem. The correct (only) answer is to quote.
Edit: Maybe you should just stick with zsh for interactive things. I'm not an expert in that shell but I know for instance that it doesn't perform word-splitting or globbing in its default mode without using its special expansions.
$ zsh -c 'x="foo *"; echo $(printf "<%s> " $x)' # two passes of unquoted expansion
<foo *>
$ zsh -c 'emulate sh; x="foo *"; echo $(printf "<%s> " $x)'
<foo> <bltins> <builtins.mm> <COMPATIBILITY> <data> ...
This is a huge win for what you want (and is one of the big things that makes zsh totally incompatible with everything else script-wise). None of the other shells can do this.
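For completeness, this is what the quoting helpers mentioned above produce (same prompt style as the example; they emit escaped text, which is mainly useful when building strings for eval or for pasting, not for fixing lst itself):
$ f='my text'
$ printf '%q\n' "$f"       # Bash's printf %q: shell-escaped output
my\ text
$ print -r -- ${(q)f}      # zsh equivalent using the (q) parameter flag
my\ text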
How about binding your lazy function to a key, so you don't type anything at all? Just hit Ctrl+z and zle inserts the text for you right in your command line.
z_lst() {
emulate -L zsh
setopt extendedglob
local lastfile
# you could even implement your whole lst function here instead
lastfile=$(lst)
LBUFFER+=" ${lastfile// /\ }" # this should print the spaces like '\ '.
}
zle -N z_lst && bindkey "^z" z_lst
Add this somewhere appropriate in your .zshrc.
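Once it's there (start a new shell or source the file), pressing Ctrl+z at the prompt inserts the name of the newest file, with its spaces escaped, straight into the command line you're editing, so there's nothing left to quote by hand.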

echo outputs -e parameter in bash scripts. How can I prevent this?

I've read the man pages on echo, and it tells me that the -e parameter will allow an escaped character, such as an escaped n for newline, to have its special meaning. When I type the command
$ echo -e 'foo\nbar'
into an interactive bash shell, I get the expected output:
foo
bar
But when I use this same command in a bash script (I've tried it character for character as a test case) I get the following output:
-e foo
bar
It's as if echo is interpreting the -e as a parameter (because the newline still shows up), yet it also treats the -e as a string to echo. What's going on here? How can I prevent the -e showing up?
You need to use #!/bin/bash as the first line in your script. If you don't, or if you use #!/bin/sh, the script will be run by the Bourne shell and its echo doesn't recognize the -e option. In general, it is recommended that all new scripts use printf instead of echo if portability is important.
In Ubuntu, sh is provided by a symlink to /bin/dash.
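A quick way to see the difference (test.sh is a throwaway name; the sh run assumes a dash-style /bin/sh, as on Ubuntu):
$ cat test.sh
#!/bin/bash
echo -e 'foo\nbar'
$ bash test.sh     # Bash's builtin echo honours -e
foo
bar
$ sh test.sh       # dash's echo treats -e as just another word, but still expands \n
-e foo
bar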
Different implementations of echo behave in annoyingly different ways. Some don't take options (i.e. will simply echo -e as you describe) and automatically interpret escape sequences in their parameters. Some take flags, and don't interpret escapes unless given the -e flag. Some take flags, and interpret different escape sequences depending on whether the -e flag was passed. Some will cause you to tear your hair out if you try to get them to behave in a predictable manner... oh, wait, that's all of them.
What you're probably seeing here is a difference between the version of echo built into bash vs /bin/echo or maybe vs. some other shell's builtin. This bit me when Mac OS X v10.5 shipped with a bash builtin echo that echoed flags, unlike what all my scripts expected...
In any case, there's a solution: use printf instead. It always interprets escape sequences in its first argument (the format string). The problems are that it doesn't automatically add a newline (so you have to remember to do that explicitly), and it also interprets % sequences in its first argument (it is, after all, a format string). Generally, you want to put all the formatting stuff in the format string, then put variable strings in the rest of the arguments so you can control how they're interpreted by which % format you use to interpolate them into the output. Some examples:
printf "foo\nbar\n" # this does what you're trying to do in the example
printf "%s\n" "$var" # behaves like 'echo "$var"', except escapes will never be interpreted
printf "%b\n" "$var" # behaves like 'echo "$var"', except escapes will always be interpreted
printf "%b\n" "foo\nbar" # also does your example
Use
alias echo='/usr/bin/echo'
to force echo to invoke coreutils' echo, which interprets the -e parameter.
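Note that Bash only expands aliases in interactive shells, so inside a script you would also have to opt in; a minimal sketch, assuming a GNU coreutils echo at /usr/bin/echo:
#!/bin/bash
shopt -s expand_aliases         # scripts ignore aliases unless this is set
alias echo='/usr/bin/echo'
echo -e 'foo\nbar'              # now runs the external echo, which understands -e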
Try this:
import subprocess

def bash_command(cmd):
    subprocess.Popen(['/bin/bash', '-c', cmd])

code = "abcde"
# you can use echo options such as -e
bash_command('echo -e "' + code + '"')
Source: http://www.saltycrane.com/blog/2011/04/how-use-bash-shell-python-subprocess-instead-binsh/
