AppleScript : error "sh: lame: command not found" number 127 - macos

I am trying to create an AppleScript with the commands below. The problem is that the third line raises an error, even though I have no problem using the lame command directly in Terminal. (lame is not a native macOS utility; I installed it myself.) Does anybody have a solution?
do shell script "cd ~/Downloads"
do shell script "say -f ~/Downloads/RE.txt -o ~/Downloads/recording.aiff"
do shell script "lame -m m ~/Downloads/recording.aiff ~/Downloads/recording.mp3"
-- error "sh: lame: command not found" number 127
do shell script "rm recording.aiff RE.txt"

To complement Paul R's helpful answer:
The thing to note is that do shell script - regrettably - does NOT see the same $PATH as shells created by Terminal.app - a notable absence is /usr/local/bin.
On my OS X 10.9.3 system, running do shell script "echo $PATH" yields merely:
/usr/bin:/bin:/usr/sbin:/sbin
There are various ways around this:
Use the full path to executables, as in Paul's solution.
Manually prepend/append /usr/local/bin, where many non-system executables live, to the $PATH - worth considering if you invoke multiple executables in a single do shell script command; e.g.:
do shell script "export PATH=\"/usr/local/bin:$PATH\"
cd ~/Downloads
say -f ~/Downloads/RE.txt -o ~/Downloads/recording.aiff
lame -m m ~/Downloads/recording.aiff ~/Downloads/recording.mp3
rm recording.aiff RE.txt"
Note how the above uses a single do shell script command with multiple commands in a single string - commands can be separated by newlines or, if they are on the same line, by ;.
This is more efficient than multiple invocations, though adding error handling both inside the script code and around the do shell script command is advisable.
To get the same $PATH that interactive shells see (except additions made in your bash profile), you can invoke eval $(/usr/libexec/path_helper -s); as the first statement in your command string.
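For example, to confirm that the augmented $PATH now finds lame, a minimal sketch (it assumes lame lives in a directory listed in /etc/paths or /etc/paths.d, such as /usr/local/bin):
do shell script "eval $(/usr/libexec/path_helper -s); which lame"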
Other important considerations with do shell script:
bash is invoked as sh, which results in changes in behavior, most notably:
process substitution (<(...)) is not available
echo by default accepts no options and interprets escape sequences such as \n.
other, subtle changes in behavior; see http://www.gnu.org/software/bash/manual/html_node/Bash-POSIX-Mode.html
You could address these issues manually by prepending shopt -uo posix; shopt -u xpg_echo; to your command string.
The locale is set to the generic "C" locale instead of your system's; to fix that, manually prepend export LANG='" & user locale of (system info) & ".UTF-8' to your command string (a combined sketch follows after this list).
No startup files (profiles) are read; this is not surprising, because the shell created is a noninteractive (non-login) shell, but sometimes it's handy to load one's profile manually by prepending . ~/.bash_profile to the command string; note, however, that this makes your AppleScript less portable.
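Combining the shell-option and locale tweaks above into one call, a minimal illustrative sketch (it merely echoes the resulting locale so you can verify it):
do shell script "export LANG='" & user locale of (system info) & ".UTF-8'; shopt -uo posix; shopt -u xpg_echo; echo $LANG"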
do shell script command reference: http://developer.apple.com/library/mac/#technotes/tn2065/_index.html

Probably a PATH problem - use the full path for lame, e.g.
do shell script "/usr/local/bin/lame -m m ~/Downloads/recording.aiff ~/Downloads/recording.mp3"

I had been struggling for a long time to get the path of an installed command-line tool from AppleScript. Using the information here, I finally succeeded.
tell me to set sox_path to (do shell script "eval $(/usr/libexec/path_helper -s); which sox")
Thanks.
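Once you have the path, you can use it in later do shell script calls; a hypothetical follow-up (assuming a sox version that understands --version) might be:
do shell script (quoted form of sox_path) & " --version"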

Download the lame source from http://sourceforge.net/project/showfiles.php?group_id=290&package_id=309 and build/install it:
./configure
make install

Related

How to run csh script in bash shell

My default shell is bash in Ubuntu 14.04. I have a csh script file named clean.sh with the following make command:
#! /bin/csh -f
make -f commande.make del
And commande.make has
CKHOME=../CHEMKIN/DATA_BASES
LIN_DATA=${CKHOME}/LIN_FILES/
LINK_CKTP=${CKHOME}/LINK_CKTP_FILES/
#-----------------------------------------------------
include schema_cinetique.make
LINKFILE=${NAME}_LINK
LINKTPFILE=${NAME}_LINKTP
LINKFILE_OLD=${NAME_OLD}_LINK
LINKFILE_NEW=${NAME_NEW}_LINK
#-----------------------------------------------------
cplink :
	${COPY} ${LINK_CKTP}${LINKFILE} LINK
cplink2 :
	${COPY} ${LINK_CKTP}${LINKFILE} LINKZ1
tplink :
	${COPY} ${LINK_CKTP}${LINKTPFILE} LINKTPZ1
calcul :
	${COPY} jobtimp1 LJOBNZ1
	${COPY} unsteadyf.dat1 DATZ1
del :
	${DELETE} LINKZ1 LINKTPZ1 LJOBNZ1 DATZ1 SOLASUZ1
I opened the terminal and moved to the location and tried
./clean.sh
or
csh clean.sh &
or
csh -f clean.sh
Nothing worked.
I got the following line in the terminal,
LINKZ1 LINKTPZ1 LJOBNZ1 DATZ1 SOLASUZ1
make: LINKZ1: Command not found
make: *** [del] Error 127
So, how do I run the clean.sh file?
You are confused. The Csh script contains a single command which actually runs identically in Bash.
#!/bin/bash
make -f commande.make del
Or, for that matter, the same with #!/bin/sh. Or, in this individual case, even sh clean.sh, since the shebang is then just a comment, and the commands in the file are available in sh just as well as in csh.
Once make runs, that is what parses and executes the commands in commande.make. make is not a "Fortran command", it is a utility for building projects (but the makefile named commande.make probably contains some instructions for how to compile Fortran code).
In the general case, Csh and Bash are incompatible, but the differences are in the shell's syntax itself (so, the syntax of loops and conditionals, etc, as well as variable assignments and various other shell builtins).
As an aside, Csh command files should probably not have a .sh extension, as that vaguely implies Bourne shell (sh) syntax. File extensions on Unix are just a hint to human readers, so not technically important; but please don't confuse them/us.
(As a further aside, nobody should be using Csh in 2022. There was a time when the C shell was attractive compared to its competition, but that was on the order of 40 years ago.)
The subsequent errors you are reporting seem to indicate that the makefile depends on some utilities which you have not installed. Figuring that out is a significant enough and separate enough question that you should probably ask a new question about that, probably with more debugging details. But in brief, it seems that make needs to be run with parameters to indicate what NAME and COPY (and probably some other variables) should be. Try with make -f commande.make COPY=cp DELETE=rm NAME=foobar for a start, but it's probably not yet anywhere near sufficient.
(I would actually assume that there will be a README file or similar to actually instruct you how to use commande.make since it seems to have some local conventions of its own.)
It seems the script was written with portability in mind, i.e. the names of the cp and rm binaries are kept in variables rather than hard-coded. My best guess is that this was done to make it possible to run the script on non-UNIX systems, like Windows.
To make it work, export the respective variables before running the script. For the del action you are calling, only the DELETE variable is needed. It should be set to rm which is the command used to remove files on Linux:
export DELETE=rm
./clean.sh
Note: exporting the variable can also be done in one line when invoking the script, by prepending it to the command line:
DELETE=rm ./clean.sh
This behaviour is described in the bash manual:
The environment for any simple command or function may be augmented temporarily by prefixing it with parameter assignments, as described in Shell Parameters. These assignment statements affect only the environment seen by that command.
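A quick way to see this behaviour in a terminal (FOO is just a placeholder name):
FOO=bar bash -c 'echo "$FOO"'   # prints: bar
echo "${FOO-unset}"             # prints: unset - FOO existed only for that one command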

Should I use a Shebang with Bash scripts?

I am using Bash
$ echo $SHELL
/bin/bash
and starting about a year ago I stopped using Shebangs with my Bash scripts. Can
I benefit from using #!/bin/sh or #!/bin/bash?
Update: in certain situations a file is only recognized as a script if it has the shebang, for example:
$ cat foo.sh
ls
$ cat bar.sh
#!/bin/sh
ls
$ file foo.sh bar.sh
foo.sh: ASCII text
bar.sh: POSIX shell script, ASCII text executable
On UNIX-like systems, you should always start scripts with a shebang line. The system call execve (which is responsible for starting programs) relies on an executable having either an executable header or a shebang line.
From FreeBSD's execve manual page:
The execve() system call transforms the calling process into a new
process. The new process is constructed from an ordinary file, whose
name is pointed to by path, called the new process file.
[...]
This file is
either an executable object file, or a file of data for an interpreter.
[...]
An interpreter file begins with a line of the form:
#! interpreter [arg]
When an interpreter file is execve'd, the system actually execve's the
specified interpreter. If the optional arg is specified, it becomes the
first argument to the interpreter, and the name of the originally
execve'd file becomes the second argument
Similarly from the Linux manual page:
execve() executes the program pointed to by filename. filename must be
either a binary executable, or a script starting with a line of the
form:
#! interpreter [optional-arg]
In fact, if a file doesn't have the right "magic number" in its header (like an ELF header or #!), execve will fail with the ENOEXEC error (again from FreeBSD's execve manpage):
[ENOEXEC] The new process file has the appropriate access
permission, but has an invalid magic number in its
header.
If the file has executable permissions and no shebang line, but does appear to be a text file, the behaviour depends on the shell that you're running in.
Most shells seem to start a new instance of themselves and feed it the file, see below.
Since there is no guarantee that the script was actually written for that shell, this can work or fail spectacularly.
From tcsh(1):
On systems which do not understand the `#!' script interpreter conven‐
tion the shell may be compiled to emulate it; see the version shell
variable. If so, the shell checks the first line of the file to see if
it is of the form `#!interpreter arg ...'. If it is, the shell starts
interpreter with the given args and feeds the file to it on standard
input.
From FreeBSD's sh(1):
If the program is not a normal executable file (i.e., if it
does not begin with the “magic number” whose ASCII representation is
“#!”, resulting in an ENOEXEC return value from execve(2)) but appears to
be a text file, the shell will run a new instance of sh to interpret it.
From bash(1):
If this execution fails because the file is not in executable format,
and the file is not a directory, it is assumed to be a shell script, a
file containing shell commands. A subshell is spawned to execute it.
You cannot always depend on the location of a non-standard program like bash. I've seen bash in /usr/bin, /usr/local/bin, /opt/fsf/bin and /opt/gnu/bin to name a few.
So it is generally a good idea to use env:
#!/usr/bin/env bash
If you want your script to be portable, use sh instead of bash.
#!/bin/sh
While standards like POSIX do not guarantee the absolute paths of standard utilities, most UNIX-like systems seem to have sh in /bin and env in /usr/bin.
Scripts should always begin with a shebang line. If a script doesn't start with one, it may be executed by the current shell. But that means that if someone who uses your script is running a different shell than you do, the script may behave differently. Also, it means the script can't be run directly by a program (e.g. via the C exec() system call, or find -exec); it has to be run from a shell.
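A quick demonstration of that limitation, using throwaway file names (the exact error text varies between systems):
$ printf 'echo hello\n' > noshebang
$ chmod +x noshebang
$ ./noshebang                          # an interactive shell falls back to running the file itself
hello
$ find . -name noshebang -exec {} \;   # execve() has no interpreter to use, so this typically fails with ENOEXEC ("Exec format error")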
You might be interested in an early description by Dennis M. Ritchie (dmr), who invented the #! mechanism:
From uucp Thu Jan 10 01:37:58 1980
.>From dmr Thu Jan 10 04:25:49 1980 remote from research
The system has been changed so that if a file
being executed begins with the magic characters #! , the rest of the
line is understood to be the name of an interpreter for the executed
file. Previously (and in fact still) the shell did much of this job;
it automatically executed itself on a text file with executable mode
when the text file's name was typed as a command. Putting the facility
into the system gives the following benefits.
1) It makes shell scripts more like real executable files, because
they can be the subject of 'exec.'
2) If you do a 'ps' while such a command is running, its real name
appears instead of 'sh'. Likewise, accounting is done on the basis of
the real name.
3) Shell scripts can be set-user-ID.
4) It is simpler to have alternate shells available; e.g. if you like
the Berkeley csh there is no question about which shell is to
interpret a file.
5) It will allow other interpreters to fit in more smoothly.
To take advantage of this wonderful opportunity, put
#! /bin/sh
at the left margin of the first line of your shell scripts. Blanks
after ! are OK. Use a complete pathname (no search is done). At the
moment the whole line is restricted to 16 characters but this limit
will be raised.
Hope this helps
If you write bash scripts, i.e. non-portable scripts containing bashisms, you should keep using the #!/bin/bash shebang just to be sure the correct interpreter is used. You should not replace the shebang with #!/bin/sh, as bash will then run in POSIX mode and some of your scripts might behave differently.
If you write portable scripts, i.e. scripts only using POSIX utilities and their supported options, you might keep using #!/bin/sh on your system (i.e. one where /bin/sh is a POSIX shell).
If you write strictly conforming POSIX scripts to be distributed across various platforms and you are sure they will only be launched from a POSIX-conforming system, you might, and probably should, remove the shebang, as stated in the POSIX standard:
As it stands, a strictly conforming application must not use "#!" as the first two characters of the file.
The rationale is that the POSIX standard doesn't mandate /bin/sh to be the POSIX-compliant shell, so there is no portable way to specify its path in a shebang. In this third case, to be able to use the find -exec syntax on systems unable to run a shebang-less (but still executable) script, you can simply specify the interpreter in the find command itself, e.g.:
find /tmp -name "*.foo" -exec sh -c 'myscript "$@"' sh {} +
Here, as sh is specified without a path, the POSIX shell will be run.
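For reference, the extra sh before {} becomes $0 inside the -c script, and the found file names then land in "$@"; run by hand with a hypothetical file, it would look like:
sh -c 'myscript "$@"' sh /tmp/example.foo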
The header is useful since it specifies which shell to use when running the script. For example, #!/bin/zsh would run the script with zsh instead of bash, which provides different commands.
For example, this page specifies the following:
Using #!/bin/sh, the default Bourne shell in most commercial variants
of UNIX, makes the script portable to non-Linux machines, though you
sacrifice Bash-specific features ...
TL;DR: always in scripts you execute; preferably not in scripts you source
Always in your parent script
Note: #!/bin/sh does not guarantee Bash; on many systems it points to a different, POSIX-mode shell.
Be explicit, so that nothing else overrides the interpreter your script was written for.
You don't want a user at a zsh terminal to run into trouble with a script that was written for bash.
You don't want a bash-only builtin like source (rather than the POSIX .) to break the script when someone runs it under plain sh.
You don't want, e.g., zsh features that are not available in other interpreters to creep into your bash code. So put #!/bin/bash in all your script headers; then any zsh habits in your scripts will break immediately and you will know to remove them before roll-out.
Being explicit is the safest choice, so your scripts don't break when they are invoked from a shell such as zsh.
Not expected for included source scripts
FYI: the POSIX-compliant way to source a file in a bash script is ., not source
You can use either for sourcing, but I'll stick with the POSIX form.
Standard "shebanging" for all scripting:
parent.sh:
#!/bin/bash
echo "My script here"
. sourced.sh # child/source script, below
sourced.sh:
echo "I am a sourced child script"
But, you are allowed to do this...
sourced.sh: (optional)
#!/bin/bash
echo "I am a sourced child script"
There, the #!/bin/bash "shebang" will be ignored. The main reason I would use it is for syntax highlighting in my text editor. However, in the "proper" scripting world, it is expected that your rolled-out source'd script will not contain the shebang.
In addition to what the others said, the shebang also enables syntax highlighting in some text editors, for example vim.
$SHELL and #!/bin/bash or #!/bin/sh are different.
To start, /bin/sh is a symlink to /bin/bash on most Linux systems (on Ubuntu it now points to /bin/dash).
But on whether to start with /bin/sh or /bin/bash:
Bash and sh are two different shells. Basically bash is sh, with more
features and better syntax. Most commands work the same, but they are
different.
In short: if you're writing a bash script, stick with #!/bin/bash and not #!/bin/sh, because problems can arise otherwise.
$SHELL does not necessarily reflect the currently running shell.
Instead, $SHELL is the user's preferred shell, which is typically the
one set in /etc/passwd. If you start a different shell after logging
in, you can not necessarily expect $SHELL to match the current shell
anymore.
Below is mine, for example, but the shell field could also be /bin/dash or /bin/sh, depending on which shell is set in passwd. So to avoid any problems, keep the passwd entry at /bin/bash, and then using $SHELL vs. #!/bin/bash won't matter as much.
root@kali:~/Desktop# cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
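If you want to compare the two on your own machine, something like this works in most shells (output will of course vary):
echo $SHELL          # the preferred login shell recorded in /etc/passwd
ps -p $$ -o comm=    # the shell that is actually running right now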
Sources:
http://shebang.mintern.net/bourne-is-not-bash-or-read-echo-and-backslash/
https://unix.stackexchange.com/questions/43499/difference-between-echo-shell-and-which-bash
http://man.cx/sh
http://man.cx/bash

Understanding script language

I'm a newbie to scripting languages trying to learn bash programming.
I have a very basic question. Suppose I want to create $HOME/folder/
with two child folders, folder1 and folder2 (three directories in total).
If I execute a command in the shell like
mkdir -p $HOME/folder/{folder1,folder2}
the parent folder is created along with both child folders.
If the same thing is executed through script I'm not able get expected result. If sample.sh contains
#!/bin/sh
mkdir -p $HOME/folder/{folder1,folder2}
and I execute sh ./sample.sh, the parent folder is created, but inside it a single directory literally named {folder1,folder2} is created; the separate child folders are not created.
My questions are:
Why does the script behave differently from the same command typed directly in the terminal?
How do I make it work?
bash behaves differently when invoked as sh, to more closely mimic the POSIX standard. One of the things that changes is that brace expansion (which is absent from POSIX) is no longer recognized. You have several options:
Run your script using bash ./sample.sh. This ignores the hashbang and explicitly uses bash to run the script.
Change the hashbang to read #!/bin/bash, which allows you to run the script by itself (assuming you set its execute bit with chmod +x sample.sh).
Note that running it as sh ./sample.sh would still fail, since the hashbang is only used when running the file itself as the executable.
Don't use brace expansion in your script. You could still use a loop as a slightly longer way to avoid duplicating code:
for d in folder1 folder2; do
    mkdir -p "$HOME/folder/$d"
done
Brace expansion doesn't happen in sh.
In sh:
$ echo {1,2}
produces
{1,2}
In bash:
$ echo {1,2}
produces
1 2
Execute your script using bash instead of using sh and you should see expected results.
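For example, with the script from the question:
bash ./sample.sh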
This is probably happening because while your tags indicate you think you are using Bash, you may not be. This is because of the very first line:
#/bin/sh
That says "use the system default shell." That may not be bash. Try this instead:
#!/usr/bin/env bash
Oh, and note that you were missing the ! after #. I'm not sure if that's just a copy-paste error here, but you need the !.

Using aliases with nohup

Why doesn't the following work?
$ alias sayHello='/bin/echo "Hello world!"'
$ sayHello
Hello world!
$ nohup sayHello
nohup: appending output to `nohup.out'
nohup: cannot run command `sayHello': No such file or directory
(The reason I ask is because I've aliased my perl and python to different perl/python binaries which were optimized for my own purposes; however, nohup gives me trouble if I don't supply the full path to my perl/python binaries.)
Because the shell doesn't pass aliases on to child processes (except when you use $() or ``).
$ alias sayHello='/bin/echo "Hello world!"'
Now an alias is known in this shell process, which is fine but only works in this one shell process.
$ sayHello
Hello world!
Since you said "sayHello" in the same shell it worked.
$ nohup sayHello
Here, a program "nohup" is being started as a child process. Therefore, it will not receive the aliases.
Then it starts the child process "sayHello" - which isn't found.
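You can see the same thing without nohup; a plain child shell does not know the alias either (the exact error wording varies by bash version):
$ bash -c 'sayHello'
bash: sayHello: command not found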
For your specific problem, it's best to make the new "perl" and "python" look like the normal ones as much as possible. I'd suggest setting the search path.
In your ~/.bash_profile add
export PATH="/my/shiny/interpreters/bin:${PATH}"
Then re-login.
Since this is an environment variable, it will be passed to all child processes, whether they are shells or not - this should cover most cases.
For bash: try putting your alias within backquotes, i.e. nohup `your_alias`. It works for me.
With bash, you can invoke a subshell interactively using the -i option. This will source your .bashrc as well as enable the expand_aliases shell option. Granted, this will only work if your alias is defined in your .bashrc which is the convention.
Bash manpage:
If the -i option is present, the shell is interactive.
expand_aliases: If set, aliases are expanded as described above under ALIASES. This option is enabled by default for interactive shells.
When an interactive shell that is not a login shell is started, bash reads and executes commands from /etc/bash.bashrc and ~/.bashrc, if these files exist.
$ nohup bash -ci 'sayHello'
If you look at the Aliases section of the Bash manual, it says
The first word of each simple command, if unquoted, is checked to see
if it has an alias.
Unfortunately, it doesn't seem like bash has anything like zsh's global aliases, which are expanded in any position.

Why can't I call `history` from within Ruby?

I can run Bash shell commands from within a Ruby program or irb using backticks (and %x(), system, etc). But that does not work with history for some reason.
For example:
jones$ irb --simple-prompt
>> `whoami`
=> "jones\n"
>> `history`
(irb):2: command not found: history
=> ""
From within a Ruby program it produces this error:
/usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31: command not found: history
In bash itself, those commands work fine
It's not that the Ruby call is invoking a new shell - it simply does not find that command...
Anyone know why? I'm stumped...
Most unix commands are implemented as executable files, and the backtick operator gives you the ability to execute these commands from within your script. However, some commands that are interpreted by bash are not executable files; they are features built-in to the bash command itself. history is one such command. The only way to execute this command is to first execute bash, then ask it to run that command.
You can use the command type to tell you the type of a particular command in order to know if you can exec it from a ruby (or python, perl, Tcl, etc script). For example:
$ type history
history is a shell builtin
$ type cat
cat is /bin/cat
You'll also find that you can't exec aliases defined in your .bashrc file, since those aren't executable files either.
It helps to remember that exec'ing a command doesn't mean "run this shell command" but rather "run this executable file". If it's not an executable file, you can't exec it.
It's a built-in. In general, you can run built-ins by manually calling the shell:
`bash -c 'history'`
However, in this case, that will probably not be useful.
{~} ∴ which history
history: shell built-in command
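If what you are really after is the saved shell history, a simpler sketch (assuming the default bash setup, where history is flushed to ~/.bash_history when a shell exits) is to read that file from your Ruby code instead:
>> `cat ~/.bash_history`
Note that, by default, the file only reflects sessions that have already ended.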
