I have a simple shell script as follows:
myfunc()
{
#print hello world
echo "Hello World";
}
myfunc
The script works fine when I execute it on a Linux PC, but when I run the same script on uClinux I get a "syntax error".
What could be the reason for the problem?
Update:
The following code works in uclinux:
#!/bin/sh
echo "Hello World"
But, the following code is not working:
#!/bin/sh
myfunc()
{
#print hello world
echo "Hello World";
}
myfunc
The result depends on which shell you run. On most uClinux systems the shell is actually a symbolic link to BusyBox, which implements several tiny shells for different memory-footprint requirements. As I remember, only ash supports the function syntax. Check your BusyBox version and its build configuration.
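For example, two quick checks (exact paths may vary on your target):
busybox         # the first line of its usage output shows the BusyBox version
ls -l /bin/sh   # shows what /bin/sh is actually linked to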
Maybe your installation of uClinux uses a different shell?
Saying "shell script doesn't work" is like saying "my source code doesn't work". Of course the phrase only makes sense if you say what language your source code is in. Similarly for shell script: is it bash? is it ksh? is it tcsh? For uclinux I highly suspect it's busybox.
Your shell script should have a shebang line so that the script is executed by the shell you designate. This can reduce or eliminate many unexpected errors caused by syntax differences between shells, which arise when the script is run by the current (or default) shell, which may be a different shell for a number of reasons.
The first line of the script file should be similar to:
#!/bin/sh
with the path and name of the shell appropriate to your needs.
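For instance, if your BusyBox build installs the ash applet at /bin/ash (common, but worth verifying on your target), the original script would become:
#!/bin/ash
myfunc()
{
echo "Hello World"
}
myfunc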
Is your actual myfunc defined on one line, as you show it? That's a syntax error, since you're commenting out everything after the #, including the closing }.
If you put myfunc() { #print hello world echo "Hello World"; } on one line, then #print hello world echo "Hello World"; } gets interpreted as a comment. Remove the #print hello world part and try again.
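For example, the one-line definition is valid once the comment is removed:
myfunc() { echo "Hello World"; }
myfunc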
If you are using BusyBox with hush, configure the shell part of your BusyBox build to support functions:
make busybox-menuconfig
Shells -> Support function syntax (should be checked)
Related
I have a few bash functions like
#!/bin/sh
git-ci() {
...
}
When I was not using fish I had a source ~/.my_functions line in my ~/.bash_profile, but now it doesn't work.
Can I use my bash functions with fish? Or the only way is to translate them into fish ones and then save them via funcsave xxx?
As @Barmer said, fish doesn't care about compatibility, because one of its goals is
Sane Scripting
fish is fully scriptable, and its syntax is simple, clean, and consistent. You'll never write esac again.
The fish folks think bash is insane and I personally agree.
One thing you can do is put your bash functions in separate script files and call them as commands from within fish.
Example:
Before
#!/bin/bash
git-ci() {
...
}
some_other_function() {
...
}
After
#!/bin/bash
# file: git-ci
# Content of git-ci function here
#!/bin/bash
# file: some_other_function
# Content of some_other_function function here
Then put your script files somewhere in your path. Now you can call them from fish.
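For example, assuming you saved the first file as git-ci in a directory on your PATH, such as ~/bin:
chmod +x ~/bin/git-ci   # make the script executable
git-ci                  # callable from fish like any other command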
Hope that helps.
The syntax for defining functions in fish is very different from POSIX shell and bash.
The POSIX function:
hi () {
echo hello
}
is translated to:
function hi
echo hello
end
There are other differences in scripting syntax. See the section titled Blocks in Fish - The friendly interactive shell for examples.
So it's basically not possible to use functions that were written for bash in fish; the two are as different as bash and csh. You'll have to go through all your functions and convert them to fish syntax.
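Arguments are handled differently as well: bash's positional parameters $1, $2, ... become the $argv list in fish. A small made-up example, the bash function:
greet() {
echo hello $1
}
becomes in fish:
function greet
echo hello $argv[1]
end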
If you don't want to change all the syntax, one workaround is to simply create a fish function that runs a bash script and passes the arguments right along.
Example
If you have a function like this
sayhi () {
echo Hello, $1!
}
you'd just change it by stripping away the function wrapper and saving it as an executable script:
#!/bin/bash
echo Hello, $1!
and then create a fish function which calls that script (with the name sayhi.fish, for example)
function sayhi
# run bash script and pass on all arguments
/bin/bash absolute/path/to/bash/script $argv
end
and, voila, just run it as you usually would
> sayhi ivkremer
Hello, ivkremer!
While working on a project written in bash by my former colleague, I noticed that all the .sh files that contain nothing but function definitions start with #!/bin/false, which is, as I understand it, a safety mechanism to prevent the execution of include-only files.
Example:
my_foo.sh
#!/bin/false
function foo(){
echo foontastic
}
my_script.sh
#!/bin/bash
./my_foo.sh # does nothing
foo # error, no command named "foo"
. ./my_foo.sh
foo # prints "foontastic"
However, when I don't use #!/bin/false, the effects of both proper and improper use are exactly the same:
Example:
my_bar.sh
function bar(){
echo barvelous
}
my_script.sh
#!/bin/bash
./my_bar.sh # spawns a subshell, defines bar, and exits, effectively doing nothing
bar # error, no command named "bar"
. ./my_bar.sh
bar # prints "barvelous"
Since properly using those scripts by including them with source works as expected in both cases, and executing them in both cases does nothing from the perspective of the parent shell and generates no error message about invalid use, what exactly is the purpose of #!/bin/false in those scripts?
In general, let’s consider a file testcode with bash code in it
#!/bin/bash
if [ "$0" = "${BASH_SOURCE[0]}" ]; then
echo "You are executing ${BASH_SOURCE[0]}"
else
echo "You are sourcing ${BASH_SOURCE[0]}"
fi
you can do three different things with it:
$ ./testcode
You are executing ./testcode
This works if testcode has the right permissions and the right shebang. With a shebang of #!/bin/false, this outputs nothing and returns a code of 1 (false).
$ bash ./testcode
You are executing ./testcode
This completely disregards the shebang (which can even be missing) and only requires read permission, not execute permission. This is the way to call bash scripts from a CMD command line on Windows (if you have bash.exe in your PATH...), since the shebang mechanism doesn't work there.
$ . ./testcode
You are sourcing ./testcode
This also completely disregards the shebang, as above, but it is a completely different matter, because sourcing a script means having the current shell execute it, while executing a script means invoking a new shell to execute it. For instance, if you put an exit command in a sourced script, you exit from the current shell, which is rarely what you want. Therefore, sourcing is often used to load function definitions or constants, in a way somewhat resembling the import statement of other programming languages, and various programmers develop different habits to distinguish scripts meant to be executed from include files meant to be sourced. I usually don't use any extension for the former (others use .sh), but I use an extension of .shinc for the latter.
Your former colleague used a shebang of #!/bin/false, and one can only ask them why they preferred this to a zillion other possibilities. One reason that comes to my mind is that you can use file to tell these files apart:
$ file testcode testcode2
testcode: Bourne-Again shell script, ASCII text executable
testcode2: a /bin/false script, ASCII text executable
Of course, if these include files contain only function definitions, it’s harmless to execute them, so I don’t think your colleague did it to prevent execution.
Another habit of mine, inspired by the Python world, is to place some regression tests at the end of my .shinc files (at least while developing)
... function definitions here ...
[ "$0" != "${BASH_SOURCE[0]}" ] && return
... regression tests here ...
Since return generates an error in executed scripts but is OK in sourced scripts, a more cryptic way to get the same result is
... function definitions here ...
return 2>/dev/null || :
... regression tests here ...
From the point of view of the parent shell, the difference between using #!/bin/false and not using it is the return code.
/bin/false always returns a failing exit code (1 in my case; POSIX only requires it to be non-zero).
Try this:
./my_foo.sh # does nothing
echo $?     # shows "1", i.e. failure
./my_bar.sh # does nothing
echo $?     # shows "0", i.e. success
So, using #!/bin/false not only documents the fact that the script is not meant to be executed, it also produces an error return code when it is.
I have a very basic problem using GNU Make 3.81 on Windows. I must be doing something very silly and I'm sure someone here will point it out in milliseconds. My problem is with using ";" to run multiple commands in the same shell.
As I understand it, make runs each line of a recipe in its own command shell, so if you want to run two commands one after the other, you must put them on the same line separated by a semicolon. In its simplest form:
all:
echo hello; echo hello
...should produce the output:
hello
hello
But for me it produces the output:
hello; echo hello
In other words, the semicolon is being passed straight through to the shell, which doesn't make too much sense for cmd.exe.
I'm now ready to be embarrassed by everyone pointing out where I've gone wrong...
FYI, the reason I need this is that I'm using a $(foreach) loop which must execute two shell commands for each iteration.
You seem to be under the impression that ; is a GNU make operator for executing multiple commands in the same shell within a recipe. Not so. It is a Unix shell operator for punctuating a sequence of commands on the same line. It is not an operator for the Windows shell, cmd, so when the recipe:
echo hello; echo hello
is executed by make on Linux, it produces the output you expect, but when executed by make on Windows it just means: echo this:
hello; echo hello
So, the answer is that your shell is the thing that has to understand ; as a separator for multiple commands on the same line; it's nothing to do with make. This is not the case for Windows cmd.exe, but it is the case for the shells that normally ship with environments where make is used (Linux, MSYS, etc.). In my case, a good workaround was this:
define useDef
echo hello
echo hello
endef
all:
$(call useDef)
With this form of "single-lined" definition I can invoke a multi-line command inside $(foreach). Make still runs each echo in its own shell, but in my case that's OK because I'm appending output to a file. If you need two commands to run in the same shell for some reason, then on Windows you would need to write a separate batch file (which I suppose you could create from inside the makefile).
I know this question is relatively old, but I've stumbled across the same problem recently. The solution (for me) was quite simple. I replaced ; with &.
Basically
all:
echo hello & echo hello
will produce
hello
hello
in cmd.exe.
And it works with $(foreach) loops as well.
UPD: You can also use && instead of & if you don't want your commands to fail silently.
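To illustrate, a minimal sketch of a $(foreach) loop chained with && (FILES is just a made-up variable here):
FILES := a b c

all:
	$(foreach f,$(FILES),echo $(f) &&) echo done
This expands to the single line echo a && echo b && echo c && echo done, which cmd.exe runs left to right, stopping at the first failure.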
A script like the following:
#!/bin/bash
function hello {
echo "Hello World"
}
hello
will work fine, but when I call it with nohup
nohup ./myScript.sh
the script no longer works and the following is printed:
./myScript.sh: 5: ./myScript.sh: function: not found
Hello World
./myScript.sh: 9: ./myScript.sh: Syntax error: "}" unexpected
I know that there is no point in using nohup for this script since it runs fairly quickly, but I have another script that is going to take several hours to run, and I have used that function syntax in it. Is there a way to fix this?
Note that to declare a function in POSIX standard shell, you just write the function name followed by a pair of empty parentheses, and a body in curly brackets:
#!/bin/bash
hello() {
echo "Hello World"
}
hello
Some shells, like bash, also accept the function keyword to declare functions, but based on your error message, it looks like the shell that you are using does not accept it. For portability, it would be best to use the POSIX standard declaration style.
The reason you see a difference between running the script directly and running it under nohup is the blank lines at the beginning of the file. The #! needs to be the first two characters of the file in order to specify the interpreter; otherwise the line is just a comment. It looks like when executed from within a Bash shell the script gets interpreted by Bash, which recognizes the Bash extension. But when run from nohup it gets interpreted by sh, which on some systems is a more bare-bones shell, like dash, that only supports the POSIX syntax above.
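A quick way to check is to look at the very first bytes of the file (myScript.sh is the script from the question):
head -c 2 myScript.sh   # prints '#!' only if the shebang really starts the file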
I want to be able to start a process and send input to it immediately.
Take Bash as an example.
The following code will enter another Bash process and then print "Hello World!" on the screen only after I have terminated that process with exit:
bash
echo "Hello World!"
Is there a way to enter bash and then print "Hello World!" INSIDE that process?
I'm using Ruby and Bash on Ubuntu.
UPDATE: This question was not intended to be Bash-specific. Bash was just an example. It would be better if someone could post an answer that handles any binary.
You may be looking for the expect tool.
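A minimal sketch of that approach, assuming expect is installed: it spawns bash, sends a command into the running shell, then hands the session over to you:
expect -c 'spawn bash; send "echo Hello World!\r"; interact'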
bash -c 'echo "Hello World!"'
You can also try writing a bash script and invoking it:
bash ./myscript
or put #!/bin/bash as the first line of a text file, make it executable, and it will be invoked using bash like any other executable:
./myscript
Update0
Bash is an interpreter. There are many other interpreters; I'd highly recommend you take a look at Python. You can send instructions to these programs to be interpreted easily enough.
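For example, most interpreters accept a command string directly, in the same spirit as bash -c:
python -c 'print("Hello World!")'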
You might also be referring to the Unix IO-model, in which case you may want to ask a question relating to the use of piping with stdin and stdout.
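For example, anything written to bash's standard input is executed inside that bash process:
echo 'echo "Hello World!"' | bash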
In Ruby itself you can run external commands and capture their output with %x( ) or backticks:
%x(some external bash commands)
`ls -1`