I'm making a bash script and I'd like to make sure it's portable. For context, the command will be part of the tmux-resurrect plugin.
I want to use this command: ps -eo ppid,command. Is that command portable?
I'd also be glad to hear how to check that myself. For example, is there a service that can test commands on a large number of operating systems?
The POSIX standard is publicly available on the web, and yes, ps is one of the standardized utilities. If you stick to the standard options, you should be fairly portable.
Note, however, that forcing some utilities to behave in a POSIX-correct way may require setting certain environment variables. In particular, systems using the GNU utilities may need POSIXLY_CORRECT=yes or similar to be set.
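For reference, a minimal sketch of the strictly standard spelling: POSIX specifies the ppid and args column names for -o (command is a widespread alias for args rather than the standardized name), and, as noted above, GNU implementations can be nudged toward standard behaviour with POSIXLY_CORRECT:

    # Portable per POSIX: -e selects every process, -o picks standard columns.
    ps -eo ppid,args

    # On GNU userlands, force standard-conforming behaviour if needed.
    POSIXLY_CORRECT=yes ps -eo ppid,args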
Related
TL;DR: What's the best way to write portable general-purpose automation scripts for Windows, Mac, and Linux?
Longer version:
I work with different platforms and often write shell scripts to automate things (run programs and other scripts, manipulate files and directories, etc).
The problem is that sh/bash substitutes on Windows are tricky, complex, often incompatible, or lack some native Unix tools. And Cygwin scares regular users when I share some of my scripts with others.
I find .bat very limited and ugly. I haven't used PowerShell much, but it looks a bit overcomplicated to me (or should I just give it another try?).
What would you recommend to do in such case? Have you had similar challenges, how did you solve them?
I would advise using a configuration manager such as Ansible, Puppet, or Chef. Their sole purpose is to automate things, and some of them are cross-platform. Look up each one I mentioned; scripts are generally easy to write in them and will work on all the platforms, but you will need to install the manager itself on each platform, which can be done with an init.sh or a simple PowerShell script.
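To make that concrete, here is a rough sketch of the sort of init.sh bootstrap meant above, assuming a Debian/Ubuntu host and Ansible as the chosen manager (the package commands will differ on other platforms):

    #!/bin/sh
    # init.sh -- illustrative bootstrap: install the configuration manager,
    # then hand control over to it.
    set -e
    if ! command -v ansible >/dev/null 2>&1; then
        sudo apt-get update
        sudo apt-get install -y ansible
    fi
    ansible --version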
I am writing several shell scripts using Ubuntu/bash and I would like to ensure that they are portable to OSX.
I have previously had trouble when I tried to use non-portable behavior of certain commands. Is there anything like an emulator for another shell environment?
I'm looking for an option besides just researching the portability of each command that I use.
Terminal.app is just a GUI, like xterm. It doesn't execute scripts. OS X uses bash, just like Ubuntu. It may, however, use a different version. For instance, OS X 10.9 uses bash version 3.2.51.
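If your script depends on features added after 3.2, it can check the interpreter version up front; a minimal sketch, assuming the script is already running under bash:

    # Bail out early on bash older than 4.0 (OS X 10.9's 3.2 lacks, for
    # example, associative arrays).
    if ((BASH_VERSINFO[0] < 4)); then
        echo "bash >= 4 required, found $BASH_VERSION" >&2
        exit 1
    fi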
What you're describing is not Terminal and probably isn't bash. It's probably "the entirety of the command line tools that are installed by default" (things like grep, sed, and cut), and in practice you mean "the entire OS." No environment other than the OS itself is going to capture all of those, and even if one did, you'd still need to worry about numerous other portability concerns, like whether there is a /proc filesystem (there isn't one on OS X).
Do you really mean to suggest that it only has to run on Ubuntu and OS X? FreeBSD is quite different. And there are many platforms that don't include all the GNU extensions that are common on Linux. In principle you could write to the POSIX standard, which they are all supposed to follow, but that won't really take you that far. In practice, the only way to know that you're portable to a platform is to test it on that platform.
But short version: no. You have to research first. And then you have to actually test it on each version of each platform you support.
There is another option though: don't use bash and don't use the low-level command line tools like grep. Use a higher-level language that you know will be on the target platform like Python, Perl, or Ruby. Then you just have to work to an old enough version of these languages and stay within the standard library. That's typically much easier to keep portable than bash scripts.
Whenever I write shell scripts (mostly software development utilities or build tools) I've generally tried to avoid using bash in favor of plain old sh for portability. However, lately I've been running into more and more issues where useful features are not available, or behavior is actually less consistent across systems using sh than it is using bash, since sh is aliased to different shells...
As I understand it, sh is the oldest Unix shell and carefully written sh scripts should in theory run on pretty much any system out there... but it also seems there are about 9000 different variants of every major shell, too. Doesn't using bash as your script interpreter effectively limit your script's portability? Sure, no problems on OS X or pretty much any Linux out there, but what about the BSDs? Solaris, AIX, HP-UX? What do you do if you really want to run on everything?
I know bash can be installed on virtually any OS, but is it really a first-class citizen on all relevant modern systems? Does it come pre-installed? I'm just not really sure whether it's best to avoid or embrace bash with the intent of having the most consistent and portable overall experience.
What do you do if you really want to run on everything?
You follow the POSIX standard for sh (and the tools you're calling) and hope that the target OS does so too. Any modern product called "UNIX" must follow this standard, and customarily (though not universally), the standard shell will be called /bin/sh. The BSDs and Linux distros tend to aim at POSIX compatibility as well.
Doesn't using bash as your script interpreter effectively limit your script's portability?
Yes, but it depends on your target audience, as you noted. If it's a short script, it's worth testing under dash (the /bin/sh on Ubuntu and Debian) for POSIX compatibility.
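One cheap way to do that test (myscript.sh is just a placeholder name):

    dash -n myscript.sh    # parse only: catches syntax-level bashisms
    dash myscript.sh       # then actually run it under a POSIX-ish shell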
Whenever I start thinking about portability issues in my shell script, I switch to another language. Perl is widely available and generally a good choice for scripts, but if your tools are to be consumed by Python, Ruby, $lang developers, use $lang to its full potential.
bash itself is just a plain C program; it does not need special privileges to run and can be put in any location. You can easily build it from source. Basically, you can run bash if you need to, without the system administrator having to install it.
As long as it is in your PATH, you can always start your script with the line:
#!/usr/bin/env bash
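For example, a sketch of a private, no-root install; the version number and prefix are only placeholders:

    # Build bash from source into your home directory -- no admin needed.
    tar xzf bash-5.2.tar.gz && cd bash-5.2
    ./configure --prefix="$HOME/local"
    make && make install
    export PATH="$HOME/local/bin:$PATH"    # env bash now finds your copy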
I know this question has kind of started "religious" wars in the past and there might not be one correct answer. But after working with ksh and csh for the last 3-4 years and going through the pain of porting from one to another, or applying a common piece of logic to multiple versions (read: legacy code), if I were writing a new script I would go for ksh, but out of compulsion rather than choice. Is there a better option than ksh/csh? Also, something that is portable across Unixes (Solaris/HP/IBM/FreeBSD) and Linux (and, if I am not asking too much or if it makes sense, all Linux flavors).
I would suggest plain old sh, which is available everywhere.
Also, it is worth noting that portability involves not only the shell but also the other commands used in a script, such as awk, grep, ps, or echo.
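echo is a good example of why: its option and escape handling differ between implementations, so portable scripts tend to prefer printf. A small sketch:

    echo -n "no newline"       # not portable: some shells print "-n" literally
    printf '%s' "no newline"   # POSIX-specified and consistent everywhere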
If you really want it to be portable (I'm not sure any shell script is maintainable), I would specify #!/bin/sh and test with dash and, if possible, other shells.
I would expect bash to be the most widespread shell at the moment, since it is the default for many Linux distributions (it can even run on Windows with Cygwin, but that's probably true for the other shells, too).
An alternative might be not to use the shell itself for scripting but one of the scripting languages out there like Perl, Python, Ruby, ...
I usually use ksh. I find that it's a good compromise between features and portability. It's there (or a compatible version is available) on most Linux boxes and Solaris. It's been a while since I used HP-UX (thankfully), but I'm pretty sure it was available there too.
If all the machines you need to support are modern, bash might be an option. Solaris 10 comes with a copy. It's the default on most Linux machines.
Your lowest common denominator is going to be Bourne (sh), so that's worth considering if portability is your main concern. It's missing some of the more friendly features of ksh and bash though.
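To give a feel for what you give up, here are a couple of bash/ksh idioms next to plain-sh equivalents (a sketch):

    # bash/ksh pattern test:
    [[ $name == backup* ]] && echo match
    # plain sh equivalent:
    case $name in backup*) echo match ;; esac

    # bash/ksh array:
    files=(a.txt b.txt); echo "${files[1]}"
    # plain sh: fall back to the positional parameters
    set -- a.txt b.txt; echo "$2"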
It's still worth steering clear of csh/tcsh for scripting. Csh Programming Considered Harmful is an oldie but still largely relevant.
My answer would be Perl.
It does everything sh, bash, etc. can do, in a nicer, more elegant manner.
It is also actually more portable. A given version of Perl is very consistent across all platforms; there are no significant differences between the Linux, Solaris, and AIX distributions, whereas porting shell scripts between these platforms is a real pain.
And it works on all Windows platforms! Provided you avoid backticks and system(), your script has a good chance of running.
Python! Check out IPython, which is an enhanced Python interpreter. Also: Python for Unix and Linux System Administration.
You can write great portable scripts, and it's fun.
I have a few of our senior QA engineers in town for a few days and I am in the process of prepping them for testing an app that we are porting to Linux and OS X. These guys are smart. While they are not programmers they do understand things like how to open memory dumps to find the function pointer, and write simple python to help automate their job. But they have always used windows, and are only familiar with the tools there.
So my question is: What would you teach them to help test a native application running on OS X or Linux?
A few ideas I had were:
Basics of the file system: where config files are (/etc), where log files are (/var/log)
How to use locate, find, grep and co.
Using gdb to examine coredumps
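For the gdb point, the minimal core-dump workflow is short enough to hand them on a card (program and core file names are illustrative):

    gdb ./myapp core       # load the binary together with its core dump
    (gdb) bt               # backtrace of the crashing thread
    (gdb) info threads     # list all threads
    (gdb) thread 2         # switch threads, then bt again
    (gdb) quit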
If they are not afraid of gdb and analyzing core dumps then they should definitely know about valgrind.
Knowing how to do system call tracing and library function call tracing is very helpful, too.
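On Linux that usually means strace for system calls and ltrace for library calls (program name illustrative):

    strace -f -o syscalls.log ./myapp    # trace system calls, following forks
    ltrace -o libcalls.log ./myapp       # trace calls into shared libraries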
If they need to kill (http://en.wikipedia.org/wiki/Kill_(command)) a renegade process, knowing about signals (http://en.wikipedia.org/wiki/Signal_(computing)) helps.
If they need to convert text files between Windows and Linux, the tr command (http://en.wikipedia.org/wiki/Tr_(Unix)) is their friend.
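For example (file names illustrative):

    # Strip the carriage returns Windows puts at the end of each line.
    tr -d '\r' < notes_windows.txt > notes_unix.txt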
If they need to download files, wget is an easy-to-use command-line tool.
Overall, a decent knowledge of the most commonly used Linux shell, bash, should be a fundamental requirement.
There is a (very basic) Windows to Linux: A Beginner's Guide that may help to overcome the initial hurdles. Some more articles are here.
Here's A beginner’s introduction to the GNU/Linux command line
First two things that come to mind
Learn the shell (sh, ksh, bash or whatever they are going to use)
Learn how to use an editor (vi/m, emacs, pico even?)
I would teach them how to set ulimit so that core files can be created. I might also include information on basic signal numbers and what they might mean. You might also give them an overview of ftp to move files off to where they are more comfortable, as well as the basics of CR/LF issues. I would explain to them the primary differences between UNIX and Windows (the slashes are different). I would also consider setting up a Samba share so that they can use the tools of their choice to edit files.
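The core-file part of that boils down to one setting (program name illustrative):

    ulimit -c unlimited    # allow core dumps in this shell session
    ./myapp                # after a crash, look for a file named core or core.<pid>
    ls -l core*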
Teaching them how to redirect output and how to use tee is probably something they would benefit from. The basics of file permissions are a must. Explaining that ssh and telnet are available to access those remote boxes might help if the telnet port is disabled. Finally, I would teach them that removing a file has no undo, unlike on Windows.
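A couple of concrete one-liners for the redirection and permissions points (names illustrative):

    ./run_tests 2>&1 | tee test_run.log   # watch output live and keep a copy
    chmod +x deploy.sh                    # permissions: make a script executable
    ls -l deploy.sh                       # -rwxr-xr-x shows who may read/write/execute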
You might consider explaining ps -ef as well as simple pipes and grep. I would show them how to background processes and maybe kill processes with kill -9. Tools such as top, xload, and pstree might help them out.
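For instance (process name and PID illustrative):

    ps -ef | grep myapp    # find the process and its PID
    ./long_import &        # run a job in the background
    kill -9 12345          # last-resort kill of PID 12345
    top                    # live per-process CPU and memory view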
I would teach them to use simple Unix tools, such as time, sed, grep, and maybe even Perl. Shell scripting and the "many simple commands" philosophy.
On the other hand, teach them how to use more complex tools such as:
valgrind
gdb
strace
etrace
EDIT: Of course, some text editor (vim/emacs/mcedit/etc) is needed.