Unable to execute shell script in Cygwin as a KornShell script - windows

I rarely touch shell scripts; another department writes them, so I understand how they work but have little hands-on experience. Unfortunately, they have been no help with my issue.
I am trying to execute some KornShell (ksh) scripts on a Windows-based machine using Cygwin. We use these scripts to launch our Oracle WebLogic servers, and now they simply will not execute, even though I used to be able to run these exact same scripts fine on my old machine.
I have narrowed this down to the shebang (the 'magic number' line) at the start of the script, where it specifies the interpreter path:
i.e.:
#!/bin/ksh
If I change it to execute as a plain sh script it works, i.e.:
#!/bin/sh
I went through the packages installed for Cygwin; the shells I have installed are:
mksh (MirBSD Korn Shell)
bash (the Bourne-Again Shell)
zsh (the Z shell)
Should I expect to see a ksh.exe in my cygwin/bin directory? There is a file named 'ksh' there, which I assumed somehow associates it with one of the other shell executables, like mksh.exe.
I understand my explanation may well be naff. But that being said, any help would be very much appreciated.
Thanks.

I believe the MirBSD Korn shell is installed as mksh. You can verify this and find the correct path by typing
% which mksh
% which ksh
or if you have no which,
% type -p mksh
% type -p ksh
or if that fails too, check /etc/shells which should list all valid shells on a system:
% grep ksh /etc/shells
You need to put the full path to the interpreter after the #! on the first line. It will probably be /bin/mksh, so your line needs to look like:
#!/bin/mksh
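Alternatively, if you would rather not edit every script, a hedged option is to create a ksh symlink pointing at mksh, assuming which mksh reported /bin/mksh and /bin/ksh does not already exist:
ln -s /bin/mksh /bin/ksh
After that, the original #!/bin/ksh shebang should resolve to mksh.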

You've probably fixed it by now, but the answer was no, your Cygwin does not (yet) know about ksh.
I solved this problem by launching the Cygwin setup in command-line mode with the -P ksh option (as described in http://www.ehow.com/how_8611406_install-ksh-cygwin.html).
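For reference, the command-line install looks roughly like this; run it from a Windows command prompt in the directory holding the Cygwin installer (the installer filename depends on which one you downloaded, e.g. setup-x86_64.exe):
setup-x86_64.exe -q -P ksh
The -P flag names the packages to install and -q runs setup unattended.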

You can run a ksh script using a .bat file:
C:\cygwin\bin\dos2unix kshfilename.ksh
C:\cygwin\bin\bash kshfilename.ksh
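Put together, a minimal .bat wrapper might look like the sketch below (kshfilename.ksh is the placeholder name from the lines above; adjust the paths to your Cygwin install):
@echo off
REM Convert Windows line endings, then run the script with a Cygwin shell
C:\cygwin\bin\dos2unix.exe kshfilename.ksh
C:\cygwin\bin\bash.exe kshfilename.ksh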

Running a shell script through Cygwin on Windows

Install KornShell (ksh) into Cygwin by the following process:
Download: ksh.2012-08-06.cygwin.i386.gz
Install ksh via Cygwin setup.
Execute Cygwin setup.exe
Choose: Install from Local Directory
Select the directory containing ksh.2012-08-06.cygwin.i386.gz as the Local Package Directory.
Complete Cygwin setup.
Restart Cygwin.
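Once Cygwin restarts, a quick sanity check (assuming the package placed ksh in /bin):
which ksh
ksh --version
which should confirm that ksh is now on the path and report the installed version.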

Related

Why is #!/usr/bin/env bash superior to #!/bin/bash?

I've seen it recommended in a number of places, including on this site (What is the preferred Bash shebang?), to use #!/usr/bin/env bash in preference to #!/bin/bash. I've even seen one enterprising individual suggest that using #!/bin/bash was wrong and bash functionality would be lost by doing so.
All that said, I use bash in a tightly controlled test environment where every drive in circulation is essentially a clone of a single master drive. I understand the portability argument, though it is not necessarily applicable in my case. Is there any other reason to prefer #!/usr/bin/env bash over the alternatives and, assuming portability was a concern, is there any reason using it could break functionality?
#!/usr/bin/env searches PATH for bash, and bash is not always in /bin, particularly on non-Linux systems. For example, on my OpenBSD system, it's in /usr/local/bin, since it was installed as an optional package.
If you are absolutely sure bash is in /bin and will always be, there's no harm in putting it directly in your shebang—but I'd recommend against it because scripts and programs all have lives beyond what we initially believe they will have.
The standard location of bash is /bin, and I suspect that's true on all systems. However, what if you don't like that version of bash? For example, I want to use bash 4.2, but the bash on my Mac is at 3.2.5.
I could try reinstalling bash in /bin but that may be a bad idea. If I update my OS, it will be overwritten.
However, I could install bash in /usr/local/bin/bash, and setup my PATH to:
PATH="/usr/local/bin:/bin:/usr/bin:$HOME/bin"
Now, if I specify bash, I don't get the old cruddy one at /bin/bash, but the newer, shinier one at /usr/local/bin. Nice!
Except my shell scripts have that #!/bin/bash shebang. Thus, when I run them, I get the old, lousy version of bash that doesn't even have associative arrays.
Using /usr/bin/env bash will use the version of bash found in my PATH. If I set up my PATH so that /usr/local/bin/bash is found first, that's the bash my scripts will use.
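As a quick illustration, a minimal sketch using the paths from this example:
#!/usr/bin/env bash
# With /usr/local/bin ahead of /bin in PATH, this reports the newer bash
echo "Running bash $BASH_VERSION from $(command -v bash)"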
It's rare to see this with bash, but it is a lot more common with Perl and Python:
Certain Unix/Linux releases which focus on stability are sometimes way behind with the release of these two scripting languages. Not long ago, RHEL's Perl was at 5.8.8 -- an eight-year-old version of Perl! If you wanted to use more modern features, you had to install your own version.
Programs like Perlbrew and Pythonbrew allow you to install multiple versions of these languages. They depend upon scripts that manipulate your PATH to get the version you want. Hard coding the path means I can't run my script under brew.
It wasn't that long ago (okay, it was long ago) that Perl and Python were not standard packages included in most Unix systems. That meant you didn't know where these two programs were installed. Was it under /bin? /usr/bin? /opt/bin? Who knows? Using #! /usr/bin/env perl meant I didn't have to know.
And Now Why You Shouldn't Use #! /usr/bin/env bash
When the path is hardcoded in the shebang, I have to run with that interpreter. Thus, #!/bin/bash forces me to use the default installed version of bash. Since bash features are very stable (unlike, say, trying to run a Python 2.x script under Python 3.x), it's very unlikely that my particular bash script will fail to work with it. And since my bash script is probably used by this system and by other systems, using a non-standard version of bash may have undesired effects. It is very likely I want to make sure that the stable, standard version of bash is used with my shell script. Thus, I probably want to hard-code the path in my shebang.
There are a lot of systems that don't have Bash in /bin, FreeBSD and OpenBSD just to name a few. If your script is meant to be portable to many different Unices, you may want to use #!/usr/bin/env bash instead of #!/bin/bash.
Note that this does not hold true for sh; for Bourne-compliant scripts I exclusively use #!/bin/sh, since I think pretty much every Unix in existence has sh in /bin.
For invoking bash it is a little bit of overkill, unless you have multiple bash binaries, such as your own in ~/bin; but that also means your code depends on $PATH containing the right entries.
It is handy for things like python though. There are wrapper scripts and environments which lead to alternative python binaries being used.
But nothing is lost by using the exact path to the binary as long as you are sure it is the binary you really want.
#!/usr/bin/env bash
is definitely better because it finds the bash executable path from your system environment variable.
Go to your Linux shell and type
env
It will print all your environment variables.
Go to your shell script and type
echo $BASH
It will print your bash path (according to the environment variable list), which you can use to build the correct shebang path in your script.
I would prefer wrapping the main program in a script like the one below, which checks all the bash binaries available on the system. That gives better control over which version is used.
#! /usr/bin/env bash
# This script just chooses the appropriate bash
# installed on the system and executes the main program with it.
readonly DESIRED_VERSION="5"
declare all_bash_installed_on_this_system
declare bash="${BASH}"   # default to the bash currently running this script

if [ "${BASH_VERSINFO}" -ne "${DESIRED_VERSION}" ]
then
    found=0
    # List every entry in /etc/shells whose basename is "bash"
    all_bash_installed_on_this_system="$(\
        awk -F'/' '$NF == "bash"{print}' "/etc/shells"\
    )"
    for bash in $all_bash_installed_on_this_system
    do
        versinfo="$( $bash -c 'echo ${BASH_VERSINFO}' )"
        [ "${versinfo}" -eq "${DESIRED_VERSION}" ] && { found=1 ; break; }
    done
    if [ "${found}" -ne 1 ]
    then
        echo "bash ${DESIRED_VERSION} not available"
        exit 1
    fi
fi

$bash main_program "$@"
Normally a #!/path/to/command shebang causes the kernel to run the named command with the path of the invoking script appended when the script is executed. For example,
# file.sh
#!/usr/bin/bash
echo hi
./file.sh will start a new process and the script will get executed as /usr/bin/bash ./file.sh
Now
# file.sh
#!/usr/bin/env bash
echo hi
will get executed as /usr/bin/env bash ./file.sh. Quoting from the man page of env, which describes it as:
env - run a program in a modified environment
So env will look for the command bash in the PATH environment variable and execute it; you can also pass environment values to env as NAME=VALUE pairs to modify the environment the command runs in.
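For example, a quick way to see that behaviour from an interactive shell (the variable name GREETING is just illustrative):
env GREETING=hello bash -c 'echo "$GREETING"'   # prints: hello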
You can test this with other scripts using different interpreters like python, etc.
#!/usr/bin/env python
# python commands
Your question is biased because it assumes that #!/usr/bin/env bash is superior to #!/bin/bash. This assumption is not true, and here's why:
env is useful in two cases:
when there are multiple versions of the interpreter that are incompatible.
For example python 2/3, perl 4/5, or php 5/7
when the location depends on the PATH, for instance with a python virtual environment.
But bash doesn't fall under any of these two cases because:
bash is quite stable, especially on modern systems like Linux and BSD which form the vast majority of bash installations.
there's typically only one version of bash installed under /bin.
This has been the case for the past 20+ years, only very old unices (that nobody uses any longer) had a different location.
Consequently going through the PATH variable via /usr/bin/env is not useful for bash.
Add to this three good reasons to use #!/bin/bash:
for system scripts (when not using sh) for which the PATH variable may not contain /bin.
For example, cron defaults to a very strict PATH of /usr/bin:/bin, which is fine, but other contexts/environments may not include /bin for some peculiar reason.
when the user has screwed up their PATH, which is very common with beginners.
for security when for example you're calling a suid program that invokes a bash script. You don't want the interpreter to be found via the PATH variable which is entirely under the user's control!
Finally, one could argue that there is one legitimate use case of env to spawn bash: when one needs to pass extra environment variables to the interpreter using #!/usr/bin/env -S VAR=value bash.
But this is not a thing with bash because when you're in control of the shebang, you're also in control of the whole script, so just add VAR=value inside the script instead and avoid the aforementioned problems introduced by env with bash scripts.
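For completeness, here is what that in-script alternative looks like (a sketch; VAR and value are placeholders from the paragraph above):
#!/bin/bash
# Instead of an '#!/usr/bin/env -S VAR=value bash' shebang, set the variable here
VAR=value
export VAR
# ... rest of the script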

Tcl script cannot be executed from bash shell script

I have a relatively strange problem with bash shell scripts and tcl scripts, invoked from within the shell scripts.
To put it in perspective, I have a shell script that generates some other shell scripts, as well as tcl scripts. Some of the generated shell scripts invoke tcl scripts with the tclsh command.
The generated files are stored in a directory that is also created by the initial shell script.
The problem is with the generated shell scripts, which invoke tclsh to run a tcl script. Even though the files are generated and the shell scripts have execute permission, the shell reports that the tcl file referenced in the shell script cannot be found.
However, the file exists and I can open it with both vi and gedit, on RHEL 9.0 and Centos 5.7 platforms. But when I take the same shell script out of the created directory, this error does not appear. Can you please suggest any ideas? I also checked directory permissions, but they seem OK, and I checked the shell script for extra characters, but did not find anything.
It's hard to tell what exactly is going wrong from your description; you leave out all the information actually required to diagnose the problem precisely. However…
You have a tclsh on your PATH, but your script isn't running despite being chmodded to be executable? That means that there's a problem with your #! line. Current best practice is that you use something like this:
#!/usr/bin/env tclsh
That will search your PATH for tclsh and use that, and it's so much easier than any of the alternative contortions.
The other thing that might be causing a problem is if your Tcl program contains:
package require Tcl 8.5
And yet the version of Tcl used by the tclsh on the path is 8.4. I've seen this a number of times, and if that's your problem you need to make sure that the right Tcl rpm is installed and to update your #! line to this:
#!/usr/bin/env tclsh8.5
Similarly for Tcl 8.6, but in that case you might need to build your own from source as well and install that in a suitable location. (Tcl 8.6 is still only really for people who are specialists.)
(The issue is that RHEL — and Centos too, which tracks RHEL — is very conservative when it comes to Tcl. The reasons for this aren't really germane to this answer though.)
Could the problem be as simple as the fact that you don't have "." in your PATH? What if instead of calling your script like "myscript.tcl" you call it like "./myscript.tcl" or "/absolute/path/to/myscript.tcl"?
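Building on that, if the generated wrapper scripts refer to the Tcl file by a bare name, one hedged fix is to have each wrapper resolve the path relative to its own location (the file name below is illustrative):
#!/bin/sh
# Sketch of a generated wrapper: locate the Tcl script next to this wrapper
dir=$(dirname "$0")
exec tclsh "$dir/myscript.tcl"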

Activating a VirtualEnv using a shell script doesn't seem to work

I tried activating a VirtualEnv through a shell script like the one below but it doesn't seem to work,
#!/bin/sh
source ~/.virtualenvs/pinax-env/bin/activate
I get the following error
$ sh virtualenv_activate.sh
virtualenv_activate.sh: 2: source: not found
but if I enter the same command on terminal it seems to work
$ source ~/.virtualenvs/pinax-env/bin/activate
(pinax-env)gautam#Aspirebuntu:$
So I changed the shell script to
#!/bin/bash
source ~/.virtualenvs/pinax-env/bin/activate
as suggested and used
$ bash virtualenv_activate.sh
gautam#Aspirebuntu:$
to run the script.
That doesn't throw an error, but neither does it activate the virtual env.
So any suggestion on how to solve this problem ?
PS : I am using Ubuntu 11.04
TLDR
You must run the .sh script with source instead of running the script on its own:
source your-script.sh
and not
your-script.sh
Details
sh is not the same as bash (although some systems simply link sh to bash, so running sh actually runs bash). You can think of sh as a watered-down version of bash. One thing that bash has that sh does not is the "source" command. This is why you're getting that error: source runs fine in your bash shell, but when you start your script using sh, the script runs in a sh shell in a subprocess. Since that script is running in sh, "source" is not found.
The solution is to run the script in bash instead. Change the first line to...
#!/bin/bash
Then run with...
./virtualenv_activate.sh
...or...
/bin/bash virtualenv_activate.sh
Edit:
If you want the activation of the virtualenv to change the shell that you call the script from, you need to use the "source" or "dot operator". This ensures that the script is run in the current shell (and therefore changes the current environment)...
source virtualenv_activate.sh
...or...
. virtualenv_activate.sh
As a side note, this is why virtualenv always says you need to use "source" to run its activate script.
source is a builtin shell command in bash, and is not available in sh. If I remember correctly, virtualenv does a lot of path and environment variable manipulation. Even running it as bash virtualenv_blah.sh won't work, since that will simply set up the environment inside the sub-shell.
Try . virtualenv_activate.sh or source virtualenv_activate.sh; this basically gets the script to run in your current shell, so all the environment variables modified by virtualenv's activate will be available.
HTH.
Edit: Here is a link that might help - http://ss64.com/bash/period.html
On Mac OS X your proposals did not seem to work for me.
I have done it this way. I'm not very happy with the solution, but I'll share it here anyway and hope that somebody can suggest a better one:
In activate.sh I have
echo 'source /Users/andi/.virtualenvs/data_science/bin/activate'
I give execution permissions by: chmod +x activate.sh
And I execute this way:
`./activate.sh`
Note that those are backticks (grave accent, ASCII code 96), not ordinary quotes.
For me, the approach below works best.
Create start-my-py-software.sh and paste in the code below:
#!/bin/bash
source "/home/snippetbucket.com/source/AIML-Server-CloudPlatform/bin/activate"
python --version
python /home/snippetbucket.com/source/AIML-Server-CloudPlatform/main.py
Make the file executable:
chmod +x start-my-py-software.sh
Now run it:
./start-my-py-software.sh
and that's it: that starts my Python-based server (or any other code).
(Ubuntu 18)
In my case (Ubuntu 16.04), the methods above either didn't work well or needed too much work.
I just made a link to the 'activate' script, put it in my home folder (or any folder accessible via $PATH), and renamed it to something simple like 'actai'.
Then in a terminal, just call 'source actai'. It worked!
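A hedged sketch of that symlink approach, using the virtualenv path from the question and the answerer's name 'actai' for the link:
ln -s ~/.virtualenvs/pinax-env/bin/activate ~/actai
source ~/actai
Since the activate script hardcodes its own virtualenv path internally, sourcing it through a link still activates the right environment.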

How do I tell what type my shell is

How can I tell what type my shell is? I.e., whether it's traditional sh, bash, ksh, csh, zsh, etc.
Note that checking $SHELL or $0 won't work because $SHELL isn't set by all shells, so if you start in one shell and then start a different one you may still have the old $SHELL.
$0 only tells you where the shell binary is, but doesn't tell you whether /bin/sh is a real Bourne shell or bash.
I presume that the answer will be "try some features and see what breaks", so if anyone can point me at a script that does that, that'd be great.
This is what I use in my .profile:
# .profile is sourced at login by sh and ksh. The zsh sources .zshrc and
# bash sources .bashrc. To get the same behaviour from zsh and bash as well
# I suggest "cd; ln -s .profile .zshrc; ln -s .profile .bashrc".
# Determine what (Bourne compatible) shell we are running under. Put the result
# in $PROFILE_SHELL (not $SHELL) so further code can depend on the shell type.
if test -n "$ZSH_VERSION"; then
  PROFILE_SHELL=zsh
elif test -n "$BASH_VERSION"; then
  PROFILE_SHELL=bash
elif test -n "$KSH_VERSION"; then
  PROFILE_SHELL=ksh
elif test -n "$FCEDIT"; then
  PROFILE_SHELL=ksh
elif test -n "$PS3"; then
  PROFILE_SHELL=unknown
else
  PROFILE_SHELL=sh
fi
It does not make fine distinctions between ksh88, ksh95, pdksh or mksh etc., but in more than ten years it has proven to work for me as designed on all the systems I was at home on (BSD, SunOS, Solaris, Linux, Unicos, HP-UX, AIX, IRIX, MicroStation, Cygwin.)
I don't see the need to check for csh in .profile, as csh sources other files at startup.
Any script you write does not need to check for csh vs Bourne-heritage because you explicitly name the interpreter in the shebang line.
Try to locate the shell path using the current shell PID:
ps -p $$
It should work at least with sh, bash and ksh.
If the reason you're asking is to try to write portable shell code, then spotting the shell type, and switching based on it, is an unreliable strategy. There's just too much variation possible.
Depending on what you're doing here, you might want to look at the relevant part of the autoconf documentation. That includes an interesting (and in some respects quite dismal) zoology of different shell aberrations.
For the goal of portable code, this section should be very helpful. If you do need to spot shell variants, then there might be some code buried in autoconf (or at least in one of the ./configure scripts it generates) which will help with the sniffing.
You can use something like this:
shell=`cat /proc/$$/cmdline`
Oh, I had this problem. :D
There is a quick hack: use the ps -p $$ command to list the process with the PID of the currently running process, which is your shell. This returns a small table; if you want, you can awk or sed the shell name out...
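For example, a hedged one-liner (the -o comm= option is POSIX, but output details vary between ps implementations):
ps -p $$ -o comm=    # prints just the shell's command name, e.g. bash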
The system shell is the one you get when you open a fresh terminal window, assuming it has not been set to something other than bash (i.e. bash is your default SHELL).
echo $SHELL
Generally, you can list all the variables defined in your current shell by running
set
If the output is a lot of stuff then run
set | less
so you can scroll through it from the top, or
set > set.txt
to save the output to a file.
Invoking a different interactive shell from bash in your terminal does not mean that your system shell gets changed: your system shell stays set to bash even if you invoke a csh shell from your bash shell for that one session.
In other words, typing /bin/csh or /bin/python in bash does not set the system shell to the shell you invoked at all.
If you really want to see the SHELL constant change then you need to set it to something else. If successful you should see the new shell whenever you open a fresh terminal...
It's an old thread, but...
In a GNU environment you can run sh --help and get something like:
BusyBox v1.23.2 (2015-04-24 15:46:01 GMT) multi-call binary.
Usage: sh [-/+OPTIONS] [-/+o OPT]... [-c 'SCRIPT' [ARG0 [ARGS]] / FILE [ARGS]]
Unix shell interpreter
So the first line tells you the shell type. =)

Need to write a program to sanely configure a login shell

I just started using a Solaris 10 (Sparc) box where I telnet in and get confronted with a very unfriendly interface (compared to the standard bash shell I use in cygwin or linux) --- the arrow keys do not work as I expect them to. Being an NIS system, changing the shell is not as easy as using the "chsh" command. And setting the SHELL environment variable in ~/.login and ~/.profile is not working for me. So I'm thinking that I may need to write a script to determine if bash is running the script and starting bash if the answer is no. My first attempt, trying to invoke /bin/bash from ~/.profile seems to work but kind of doesn't feel right. Other suggestions? And how do I tell programmatically which shell is actually executing?
You can tell what shell is running with echo $0. For example:
$ echo $0
-bash
If you're changing shell you probably want to replace the current shell process rather than be a child of it, so use exec.
Also, you want to pass bash the -l flag so it acts as if it has been called as part of the login process.
So you'll want something like:
exec bash -l
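A hedged way to combine both ideas at the end of ~/.profile; the path /usr/local/bin/bash is an assumption, so adjust it to wherever bash actually lives on your system:
# At the end of ~/.profile: only switch if we are not already running bash
if [ -z "$BASH_VERSION" ] && [ -x /usr/local/bin/bash ]; then
    SHELL=/usr/local/bin/bash
    export SHELL
    exec /usr/local/bin/bash -l
fi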
You are probably running ksh(1) on Solaris. You have several options: read the man page for ksh and configure it, or install another shell you're more familiar with, like bash. I'd personally recommend zsh.
