csh script not finding executable - shell

For the current project, I need to run the GENESIS genetic algorithm program, and the professor has provided a csh script that allows us to easily pass in the fitness function as well as external initialization and template files.
The script calls the makefile to build the executable, adding the fitness function to the mix, and produces an executable ga.FIT, where FIT is the name of the fitness function source file.
On the machines at school running Ubuntu 10.04, there is no problem whatsoever running this script. However, when I try to run it on my machine, I get the following output:
./go cancer2 ex0
Note: Genesis files modified for use on USM Linux cluster
Note2: ga.cancer2 is your executable (e.g., if you need to use the debugger)
making executables ...
make: `ga.cancer2' is up to date.
make: `report' is up to date.
running ga.cancer2 ex0 ...
ga.cancer2: Command not found.
But the executable IS there! I can manually call it separately via ga.cancer2 ex0 and it runs at both the csh and bash prompts. I've verified it's not a permissions issue, as the equivalent of chmod 755 has been applied to the executable.
Is this something specific to csh, and should I look into modifying the script for bash, or stick to remoting in to the school system?

Perhaps you need to add . to your $PATH.
And once the exam is behind you, tell your professor about the famous "Csh Programming Considered Harmful" paper, and suggest he read the Wikipedia "Considered Harmful" page.

It looks like ga.cancer2 is in your current directory. Basile's answer should work, but it's probably a better idea to modify the script so it invokes ./ga.cancer2 rather than ga.cancer2.
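To make that concrete (a sketch only, since the actual contents of the go script aren't shown here), the fix is usually a tiny change wherever the script invokes the freshly built binary by bare name:
# hypothetical invocation inside the go script -- the real line may differ
ga.$1 $2        # relies on . being in $PATH
./ga.$1 $2      # always runs the binary in the current directory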
In general, having . in your $PATH is a potential security risk (regardless of which shell you're using). Imagine cd-ing into a directory in which someone has planted an ls command that does something evil. If you make sure . isn't in your $PATH (and get into the habit of typing ./command to execute a command in your current directory), you avoid this risk.
Having . at the end of $PATH is less risky -- but since the most common name for a test program is test, and a bare test invokes the shell's builtin or /bin/test rather than your script, the ./command habit is still a good one.
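As a quick illustration (assuming you also have a local script named test), bash's type -a lists every candidate for a name in lookup order, which makes the collision easy to see:
$ type -a test     # typically reports the shell builtin first, then /bin/test or /usr/bin/test
$ ./test           # an explicit ./ is the only way to be sure your own script runs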
And Basile has a good point that csh is not the best shell for writing scripts -- but from the looks of the output, the script you're running is probably simple enough that it doesn't make much difference. Still, good habits and all that.


Mistake this is a duplicate [duplicate]

This question already has answers here:
How to obtain the first letter in a Bash variable?
(7 answers)
Closed 3 years ago.
I am trying to make a custom terminal command. I just learned I am supposed to do it using the Unix script? I don't really know much about what that is and am still trying to figure it out. What I do know is that $1 is an arg. Is it possible to make it a variable and then get the first letter like you could in Python?
EX:
str = 'happy'
str[0]  # 'h'
You're asking a few different things here.
I am trying to make a custom terminal command.
That could mean a few different things, but the most obvious meaning is that you want to add an executable to your path so that when you type it at the terminal, it runs just like any other executable on your system. This requires just a few things:
the executable permission must be set.
the file must specify how it can be executed. For interpreted programs such as bash scripts or python scripts, you can do so by beginning the file with a "shebang line" that specifies the interpreter for the file.
the file must be in one of the locations specified by your $PATH.
I just learned I am supposed to do it using the Unix script?
There's no such thing as a "unix script", but what you seem to be referring to is a "shell script". Though these are commonly associated with unix, they're no more inherently tied to unix than any other language. A shell, such as bash, sh, or any other, is just an interpreted language designed to be convenient both for interactive use by a human and for programmatic execution as part of a saved file.
I don't really know much of what that is and am still trying to figure it out.
Let's get into some specifics.
First I edit a file called 'hello-world' to contain:
#!/bin/bash
echo "Hello, world!"
Note that this filename has no "extension". Though heuristics based on file extension are sometimes used (especially on windows) to determine a file type, unix typically treats a file "extension" as part of the arbitrary file name. The thing that makes this a potentially executable bash script is the specification of that interpreter on the shebang line.
We can run our script right now from bash, just as we could if we wrote a python script.
$ bash hello-world
Hello, world!
To make the bash invocation implicit, we mark the file as executable. This enables the linux operating system to consult the beginning "magic bytes" of the file to determine how to run it. These beginning bytes might signify an ELF file (a compiled executable, written in e.g. C, C++, or Go). Or they might be #!, which happens to mean "read the rest of this first line to determine the command to run, and pass the rest of this file into that command to be interpreted".
$ chmod +x hello-world
ls -l will show us the "permissions" on the file (more accurately called the "file mode", hence chmod rather than chperm). The x stands for executable, so we have enabled the use of the leading bytes to determine the method of execution. Remember, the first two bytes of this file, and the rest of that first line, specify that this file should be "run through bash", so to speak.
$ ls -l hello-world
-rwxr-xr-x 1 danfarrell staff 33 Dec 27 20:02 hello-world
Now we can run the file from the current directory:
$ ./hello-world
Hello, world!
At this point, the only difference between this command and any other on the system is that you have to specify its location. That's because my current directory is not in the system path. In short, the path (accessible in a unix shell via the $PATH variable) is an ordered list of locations to search for a command whose location is not given explicitly.
For example, there's a very common program called whoami. I can run it directly from my terminal without specifying a location of the executable:
$ whoami
danfarrell
This is because there's a location in my $PATH in which the shell was able to find that command. Let's take a closer look. First, here's my path:
$ echo $PATH
/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/go/bin
And there's also a convenient program called whereis which can help show which path elements supply a named executable:
$ whereis whoami
/usr/bin/whoami
Sure enough, whoami is in one of the elements of the $PATH. (Actually I shared a simplified $PATH. Yours might be somewhat longer).
Finally, then, we can get to the last thing. If I put hello-world in one of the $PATH elements, I will be able to invoke it without a path. There are two ways to do this: we can move the executable to a location specified in the path, or we can add a new location to the path. For simplicity's sake I'll choose the first of these.
$ sudo cp hello-world /usr/local/bin/
Password:
I needed to use sudo to write to /usr/local/bin because it's not writable by my user directly - that's quite standard.
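For completeness, the second approach (adding a new location to the $PATH rather than copying the file into an existing one) might look like this; ~/bin is a conventional choice, but any directory works:
$ mkdir -p ~/bin
$ cp hello-world ~/bin/
$ export PATH="$HOME/bin:$PATH"                       # current shell
$ echo 'export PATH="$HOME/bin:$PATH"' >> ~/.bashrc   # future shells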
Finally, I've achieved the goal of being able to run my very important program from any location, without specifying the executable's location.
$ hello-world
Hello, world!
$ which hello-world
/usr/local/bin/hello-world
It works! I've created what might be described as a "custom terminal command".
What I do know is that $1 is an arg. Is it possible to make it a variable and then get the first letter like you could in Python?
Well, one option would be to simply write the custom terminal command in Python. If Python is available,
$ which python
/usr/bin/python
You can specify it in a shebang line just as you can a shell:
#!/usr/bin/env python
print("hello, world!"[0])
$ hello-world
h
It works!
Okay, confession time. I actually used #!/usr/bin/env python, not /usr/bin/python. env helps find the correct python to use in the user's environment, rather than hard-coding one particular python. If you've been using Python through the very long-running Python 2 to Python 3 migration, you can no doubt understand why I'm reticent to hard-code a python executable in my program.
It's certainly possible to get the first letter of a string in a bash script. But it's also very possible to write a custom command in a program other than shell. Python is an excellent choice for string manipulation, if you know it. I often use python for shell one-liners that need to interact with json, a format that doesn't lend itself well to standard unix tool stream editing.
Anyway, at the risk of incurring the SO community's ire by re-answering an "already answered" question, I'll include a version in shell (credit goes to David C Rankin):
#!/bin/bash
echo "${1:0:1}"
$ hello-world hiworld
h
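One caveat: ${1:0:1} is a bashism. If the script might run under a plain POSIX sh, printf's precision field is a portable way to get the same first character:
#!/bin/sh
# prints the first character of the first argument (POSIX sh, no bashisms)
printf '%.1s\n' "$1"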

Running "<" command between two different directories

I'm working on a small JS project and trying to get a script to run, which compiles some source files that are written in our own "language x".
To run the compiler normally, you would use the command ./a.out < source.x, and it would print out success or compilation errors, etc.
In this case, I'm trying to work between two directories and using this command:
sudo ~/Documents/server/xCompiler/./a.out < ~/Documents/server/xPrograms/source.x
But this produces no output in the terminal at all and doesn't affect the output files. Is there something I'm doing wrong with the use of <? I'm planning to use it in child_process.exec within a Node server later.
Any help would be appreciated, I'm a bit stumped.
Thanks.
Redirection operators (<, >, and others like them) describe operations to be performed by the shell before your command is run at all. Because these operations are performed by the shell itself, it's extremely unlikely that they would be broken in a way specific to an individual command: When they're performed, the command hasn't started yet.
There are, however, some more pertinent ways your first and second commands differ:
The second (non-working) one uses a fully-qualified path to the compiler itself. That means that the directory the compiler is found in and the current working directory where the compiler is running can differ. If the compiler looks for files in, or in locations relative to, its current working directory, this can cause a failure.
The second uses sudo to escalate privileges to run the compiler. This means you're running as a different user, with most environment variables cleared or modified (unless explicitly whitelisted in /etc/sudoers) during the switch -- which has widespread potential to break things, depending on details of your compiler's expectations about its runtime environment, beyond what we can reasonably be expected to diagnose here.
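If you suspect the sudo environment switch, one rough way to see how much of your environment survives it is to diff the two sets of variables (exactly what changes depends on your /etc/sudoers configuration):
$ env | sort > /tmp/env.user
$ sudo env | sort > /tmp/env.sudo
$ diff /tmp/env.user /tmp/env.sudo    # PATH, HOME and most custom variables usually differ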
That first one, at least, is amenable to a solution. In shell:
xCompile() {
  (cd ~/Documents/server/xCompiler && exec ./a.out "$@")
}

xCompile < ~/Documents/server/xPrograms/source.x
Using exec is a performance optimization: it offsets the cost of creating a new subshell (the parentheses) by consuming that subshell to launch the compiler, rather than launching the compiler as an additional subprocess.
When calling Node's child_process.exec(), you can simply pass the desired working directory in the cwd option, so no shell function is necessary.

Coding a relative path to file in OS X [duplicate]

I have a Haskell script that runs via a shebang line making use of the runhaskell utility, e.g.:
#! /usr/bin/env runhaskell
module Main where
main = do { ... }
Now, I'd like to be able to determine the directory in which that script resides from within the script, itself. So, if the script lives in /home/me/my-haskell-app/script.hs, I should be able to run it from anywhere, using a relative or absolute path, and it should know it's located in the /home/me/my-haskell-app/ directory.
I thought the functionality available in the System.Environment module might be able to help, but it fell a little short. getProgName did not seem to provide useful file-path information. I found that the environment variable _ (that's an underscore) would sometimes contain the path to the script, as it was invoked; however, as soon as the script is invoked via some other program or parent script, that environment variable seems to lose its value (and I need to invoke my Haskell script from another, parent application).
Also useful-to-know would be whether I can determine the directory in which a pre-compiled Haskell executable lives, using the same technique or otherwise.
As I understand it, this is historically tricky in *nix. There are libraries for some languages to provide this behavior, including FindBin for Haskell:
http://hackage.haskell.org/package/FindBin
I'm not sure what this will report with a script though. Probably the location of the binary that runhaskell compiled just prior to executing it.
Also, for compiled Haskell projects, the Cabal build system provides data-dir and data-files and the corresponding generated Paths_<yourproject>.hs for locating installed files for your project at runtime.
http://www.haskell.org/cabal/release/cabal-latest/doc/users-guide/authors.html#paths-module
There is a FindBin package which seems to suit your needs and it also works for compiled programs.
For compiled executables, in GHC 7.6 or later you can use System.Environment.getExecutablePath.
getExecutablePath :: IO FilePath
Returns the absolute pathname of the current executable.
Note that for scripts and interactive sessions, this is the path to the interpreter (e.g. ghci).
There is executable-path, which worked with my runghc script. FindBin didn't work for me, as it returned my current directory instead of the script dir.
I could not find a way to determine script path from Haskell (which is a real pity IMHO). However, as a workaround, you can wrap your Haskell script inside a shell script:
#!/bin/sh
SCRIPT_DIR=`dirname "$0"`
runhaskell <<EOF
main = putStrLn "My script is in \"$SCRIPT_DIR\""
EOF
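A variation on the same workaround, if the Haskell code lives in its own file rather than a here-document: export the directory from the wrapper and read it back in Haskell with System.Environment.getEnv "SCRIPT_DIR". The file name script.hs below is just a placeholder:
#!/bin/sh
# hypothetical wrapper: resolve the wrapper's own directory, export it,
# then hand control to the Haskell script sitting next to it
SCRIPT_DIR=$(cd "$(dirname "$0")" && pwd)
export SCRIPT_DIR
exec runhaskell "$SCRIPT_DIR/script.hs" "$@"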

Prog Challenge - Find paths to files called from configuration files or scripts

I have no idea how to do this, so I come here for help :) Here is what I'd need. I need to parse some configuration files or bash/sh scripts on a Red Hat Linux system and look for the paths to the files/commands/scripts meant to be executed by them. The configuration files can have different syntax or be written in different languages.
Here are the files I have to look at:
Config scripts:
/etc/inittab
/var/spool/cron/root
/var/spool/cron/tabs/root
/etc/crontab
/etc/xinetd.conf
Files located under /etc/cron.d/* recursively
Bash / Sh scripts:
Files located under /etc/init.d/* or /etc/rc.d/* recursively. These folders contain only shell scripts, so maybe all the other files listed above need separate treatment.
Now here are the challenges that I can think of:
The paths within the files may be absolute or relative;
The paths within the files may be at the beginning of lines or preceded by a character such as a space, colon or semicolon;
File paths expressed as arguments to commands/scripts must be ignored;
Paths to directories must be ignored;
Shell functions or built-in commands must be ignored.
Some examples (extracted from /etc/init.d/avahi-daemon):
if [ -s /etc/localtime ]; then
cp -fp /etc/localtime /etc/avahi/etc >/dev/null 2>&1
-> Only /bin/cp and /bin/[ must be returned for the snippet above (they're the only commands actually executed)
AVAHI_BIN=/usr/sbin/avahi-daemon
$AVAHI_BIN -r
-> /usr/sbin/avahi-daemon must be returned, but only because the variable is invoked afterwards.
Note that I do not have access to the actual filesystem; I just have a copy of the files to parse.
After writing this up, I realize how complicated it is and how unlikely a 100% working solution is... But if you like programming challenges :)
The good part is I can use any scripting language: bash/sh/grep/sed/awk, PHP, Python, Perl, Ruby, or a combination of these.
I tried to start writing it in PHP but I am struggling to get coherent results.
Thanks!
The language you use to implement this doesn't matter. What matters is that the problem is undecidable, because it is equivalent to the halting problem.
Just as we know that it is impossible to determine if a program will halt, it is impossible to know if a program will call another program. For example, you may think your script will invoke X then Z, but if X never returns, Z will never be invoked. Also, you may not notice that your script invokes Y, because the string Y may be determined dynamically and never actually appear in the program text.
There are other problems which may stymie you along the way, too, such as:
python -c 'import subprocess; subprocess.call("ls")'
Now you need not only a complete parser for Bash, but also for Python. Not to mention solve the halting problem in Python.
In other words, what you want is not possible. To make it feasible you would have to significantly reduce the scope of the problem, e.g. "Find everything starting with /usr/bin or /bin that isn't in a comment". And it's unclear how useful that would be.
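That said, if you do settle for a reduced scope along those lines, even a crude pattern match gives you a first approximation. A sketch (assuming GNU grep, pointed at your copies of the files rather than the live system): drop comment lines, then pull out anything that looks like an absolute path under the usual bin/sbin directories:
grep -rhv '^[[:space:]]*#' /etc/init.d /etc/cron.d /etc/crontab 2>/dev/null \
  | grep -oE '(/usr(/local)?)?/s?bin/[A-Za-z0-9._+-]+' \
  | sort -u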

Best/conventional method of dealing with PATH on multiple UNIX environments

The main question here is: is there a standard method of writing UNIX shell scripts that will run on multiple UNIX platforms?
For example, we have many hosts running different flavours of UNIX (Solaris, Linux) and at different versions all with slightly different file system layouts. Some hosts have whoami in /usr/local/gnu/bin/, and some in /usr/bin/.
All of our scripts seem to deal with this in a slightly different way. Some have case statements on the architecture:
case "`/script/that/determines/arch`" in
sunos-*) WHOAMI=`/usr/local/gnu/bin/whoami` ;;
*) WHOAMI=`/usr/bin/whoami` ;;
esac
With this approach you know exactly what binary is being executed, but it's pretty cumbersome if there are lots of commands being executed.
Some just set the PATH (based on the arch script above) and call commands by name alone. This is convenient, but you lose control over which command you run, e.g. if you have:
/bin/foo
/bin/bar
/other/bin/foo
/other/bin/bar
You wouldn't be able to use both /bin/foo and /other/bin/bar.
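A minimal sketch of that PATH-based variant, reusing the hypothetical arch script from above:
case "`/script/that/determines/arch`" in
    sunos-*) PATH=/usr/local/gnu/bin:$PATH ;;
    *)       PATH=/usr/bin:$PATH ;;
esac
export PATH
WHOAMI=`whoami`    # resolves to whichever whoami the PATH order finds first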
Another approach I could think of would be to have a local directory on each host with symlinks to each binary that would be needed on that host. E.g.:
Solaris host:
/local-bin/whoami -> /usr/local/gnu/bin/whoami
/local-bin/ps -> /usr/ucb/ps
Linux host:
/local-bin/whoami -> /usr/bin/whoami
/local-bin/ps -> /usr/ps
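If I went the symlink route, a small one-time bootstrap script per host could populate /local-bin, after which every script simply puts /local-bin first in its PATH. The link targets below are only examples (run as root, and adjust the paths to wherever the binaries actually live on each host):
#!/bin/sh
# hypothetical per-host bootstrap: create the /local-bin links appropriate to this host
mkdir -p /local-bin
case "`uname -s`" in
    SunOS) ln -sf /usr/local/gnu/bin/whoami /local-bin/whoami
           ln -sf /usr/ucb/ps /local-bin/ps ;;
    Linux) ln -sf /usr/bin/whoami /local-bin/whoami
           ln -sf /bin/ps /local-bin/ps ;;
esac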
What other approaches do people use? Please don't just say write the script in Python... there are some tasks where bash is the most succinct and practical means of getting a simple task accomplished.
I delegate all this to my .profile, which has an elaborate series of internal functions to try likely directories to add to the PATH. Except on OS X, where I believe this is basically impossible because Darwin/Fink/Ports each want to control your PATH, this approach works well enough.
If I cared about ambiguity (multiple instances of foo in different directories on my PATH), I would modify the functions so as to identify all ambiguous commands and require manual resolution. But for my environment, this has never been an issue. My main concern has been to have a single .profile that runs on Debian, Red Hat, Solaris, BSD, and so on. The 'try every directory that could possibly work' approach works well enough.
To set PATH to POSIX-compliant directories you can do the following at the beginning of your Bash scripts:
unset PATH
PATH="$(PATH=/bin:/usr/bin getconf PATH)"
export PATH
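A usage note: you can still append host-specific additions after the POSIX baseline, so the standard tools win the lookup but local ones remain reachable (the GNU directory here is just an example):
PATH="$(PATH=/bin:/usr/bin getconf PATH):/usr/local/gnu/bin"
export PATH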
If you know you can use Bash across different Unix systems, you may use shell builtins instead of external commands to improve portability as well. Example:
help type
type -a type
type -P ls # replaces: which ls
To disable alias / function lookup for commands such as find, ls, ... in Bash you may use the command builtin. Example:
help command
command ls -l
If you want to be 100% sure to execute a specific command located in a specific directory, using the full executable path seems the way to go. First match wins in PATH lookup!
