Bash: executing the result of a command

I've got a small script which returns a string/path. This path points to an executable; how can I run that executable? Thank you.
Example:
my_command with some arguments ... returns /home/mydesktop/myexecutable
I'd need to execute /home/mydesktop/myexecutable.

You could try this:
`your_command args etc`
The backticks get replaced by the output of the command, and that output is then evaluated. Since it is at the start of the line, bash tries to execute it as a command.
This is a handy trick to know, since you can use it for all sorts of fun:
cp your_file .backup/`date "+%Y-%m-%d"`_your_file
will prepend the current date to a copy of your file, for a poor man's backup...
EDIT: In the comments, we learned that you should actually be using the $() syntax. So, that amounts to:
$(your_command args etc)
and
cp your_file .backup/$(date "+%Y-%m-%d")_your_file
since you can nest this...
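For example, $() nests cleanly, where backticks would need backslash escaping; a small sketch (assuming python3 is on your PATH):
# nested substitution: find the directory containing the python3 binary
echo "python3 lives in $(dirname "$(which python3)")"
# the backtick equivalent needs escaped inner backticks:
# echo "python3 lives in `dirname \`which python3\``"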

If it returns the path of an executable script/program, use:
chmod +x /home/mydesktop/myexecutable
/home/mydesktop/myexecutable
If it returns an executable command STRING, use:
eval STRING
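Putting the two together for the original question, a minimal sketch (my_command and its arguments are the asker's):
# capture the path the script prints, then run it
exe=$(my_command commands other commands)
chmod +x "$exe"   # only needed if the file is not already executable
"$exe"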

Related

CMake's execute_process and arbitrary shell scripts

CMake's execute_process command seems to only let you, well, execute a process - not an arbitrary line you could feed a command shell. The thing is, I want to use pipes, file descriptor redirection, etc. - and that does not seem to be possible. The alternative would be very painful for me (I think)...
What should I do?
PS: answers for both CMake 2.8 and 3.x are of interest.
You can execute any shell command line by using your shell's support for taking in a script within a string argument:
Example:
execute_process(
  COMMAND bash "-c" "echo -n hello | sed 's/hello/world/;'"
  OUTPUT_VARIABLE FOO
)
will result in FOO containing world.
Of course, you would need to escape quotes and backslashes with care. Also remember that running bash would only work on platforms which have bash - i.e. it won't work on Windows.
execute_process command seems to only let you, well, execute a process - not an arbitrary line you could feed a command shell.
Yes, exactly this is written in the documentation for that command:
All arguments are passed VERBATIM to the child process. No intermediate shell is used, so shell operators such as > are treated as normal arguments.
I want to use pipes
Multiple COMMANDs within the same execute_process invocation are actually piped:
Runs the given sequence of one or more commands with the standard output of each process piped to the standard input of the next.
file descriptor redirection, etc. - and that does not seem to be possible.
For complex things, just prepare a separate shell script and run it using execute_process. You can pass variables from CMake to this script using its parameters, or with a preliminary configure_file.
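For example, a hypothetical helper script can hold the pipes and redirections that execute_process itself cannot express, taking its input and output paths as parameters passed from CMake:
#!/bin/bash
# helper.sh (illustrative): shell operators work here because bash parses them
# $1 = input file, $2 = output file, both supplied as command-line arguments
grep -v '^#' "$1" | sort -u > "$2"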
I needed to pipe two commands one after the other and actually learned that each COMMAND of the execute_process is piped already. So at least that much is resolved by simply adding commands one after the other:
execute_process(
  COMMAND echo "Hello"
  COMMAND sed -e "s/H/h/"
  OUTPUT_VARIABLE GREETINGS
  OUTPUT_STRIP_TRAILING_WHITESPACE)
Now the variable GREETINGS is set to hello.
If you indeed need a lot of file redirection (as you stated), you probably want to write an external script and then execute that script from CMakeLists.txt. It's really difficult to get all the escaping right in CMake.
If you can simplify your scripts to one command generating a file, then another handling that file, etc. then you can always use the INPUT_FILE and OUTPUT_FILE options. Or pass a filename to your command for the input.
It's often much cleaner to handle one file at a time. Although I understand that some commands may need multiple sources and destinations.

Difference between typing a shell command and saving it to a file and using `cat myfile` to execute it?

I have an rsync command that works as expected when I type it directly into a terminal. The command includes several --include='blah' and --exclude='foo' type arguments. However, if I save that command to a one-line file called "myfile" and I try `cat myfile` (or, equivalently $(cat myfile)), the rsync command behaves differently.
I'm sure it is the exact same command in both cases.
Is this behavior expected/explainable?
I've found the answer to this question. The point is that when the output of `cat myfile` is substituted, the shell only performs word splitting on it; it does not re-parse quotes or escape characters (like \). So arguments such as --include='blah' reach rsync with the literal quote characters still attached, which is why the command behaves differently from the same line typed at the terminal.
As a solution, I've just made "myfile" a shell script that I can execute rather than trying to use cat.
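A minimal demonstration of the effect, using echo instead of rsync:
$ printf '%s\n' "echo 'a b'" > myfile
$ `cat myfile`      # word splitting happens, but the quotes stay literal
'a b'
$ sh myfile         # the shell parses the line, so the quotes are removed
a b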

Understanding script language

I'm a newbie to scripting languages trying to learn bash programming.
I have a very basic question. Suppose I want to create a folder $HOME/folder/
with two child folders, folder1 and folder2.
If I execute command in shell like
mkdir -p $HOME/folder/{folder1,folder2}
folder will be created along with child folder.
If the same thing is executed through a script, I don't get the expected result. If sample.sh contains
#!/bin/sh
mkdir -p $HOME/folder/{folder1,folder2}
and I execute sh ./sample.sh, the parent folder is created, but inside it there is a single directory literally named {folder1,folder2}. The separate child folders are not created.
My questions are:
Why does the script behave differently from the same command typed at the terminal?
How do I make it work?
When you run the script with sh, it is interpreted by /bin/sh, which on many systems is not bash at all but a smaller POSIX shell (dash on Debian and Ubuntu, for example). Brace expansion is a bash extension absent from POSIX sh, so such a shell leaves the braces untouched. You have several options:
Run your script using bash ./sample.sh. This ignores the hashbang and explicitly uses bash to run the script.
Change the hashbang to read #!/bin/bash, which allows you to run the script by itself (assuming you set its execute bit with chmod +x sample.sh).
Note that running it as sh ./sample.sh would still fail, since the hashbang is only used when running the file itself as the executable.
Don't use brace expansion in your script. You can still avoid duplicating code with a slightly longer loop:
for d in folder1 folder2; do
mkdir -p "$HOME/folder/$d"
done
Brace expansion doesn't happen in sh.
In sh:
$ echo {1,2}
produces
{1,2}
In bash:
$ echo {1,2}
produces
1 2
Execute your script using bash instead of using sh and you should see expected results.
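Side by side, using the sample.sh from the question (and assuming /bin/sh is a POSIX shell such as dash):
$ sh ./sample.sh && ls "$HOME/folder"
{folder1,folder2}
$ rm -r "$HOME/folder"
$ bash ./sample.sh && ls "$HOME/folder"
folder1  folder2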
This is probably happening because, while your tags indicate you think you are using Bash, you may not be. The cause is the very first line:
#!/bin/sh
That says "use sh", which may well not be bash on your system. Try this instead:
#!/usr/bin/env bash
Also make sure the line really starts with #! (hash then exclamation mark); without the !, it is just an ordinary comment and the system default shell is used.

How to run multiple Unix commands at one time?

I'm still new to Unix. Is it possible to run multiple Unix commands at one time? For example, could I write all the commands I want to run in a file, so that when I call that file it runs all the commands inside it? Or is there a better way that I do not know about?
Thanks for all the comments and suggestions; I appreciate it.
Short answer is: yes. The concept is known as shell scripting, or bash scripts (bash being a common shell). To create a simple bash script, create a text file with this at the top:
#!/bin/bash
Then paste your commands inside of it, one per line.
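For example, a foo.sh made of placeholder commands might look like this:
#!/bin/bash
# each command runs in order, one after another
date
whoami
ls -l /tmp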
Save your file, usually with the .sh extension (but not required), and you can run it like:
sh foo.sh
Or you could change the permissions to make it executable:
chmod u+x foo.sh
Then run it like:
./foo.sh
Lots of resources available on this site and the web for more info, if needed.
echo 'hello' && echo 'world'
Just separate your commands with &&
We can run multiple commands in the shell by using ; as a separator between commands.
For example,
ant clean;ant
If we use && as the separator, the next command runs only if the previous command succeeded.
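The difference is easy to see with a first command that fails:
$ false ; echo "ran anyway"    # ; runs the next command unconditionally
ran anyway
$ false && echo "ran anyway"   # && skips it because false failed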
You can also use a semicolon ';' to run multiple commands, like:
$ ls ; who
Yep, just put all your commands in one file and then
bash filename
This will run the commands in sequence. If you want them all to run in parallel (i.e. without waiting for each command to finish), add an & to the end of each line in the file.
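A minimal sketch of the parallel variant (the task names are placeholders); wait blocks until all background jobs finish:
#!/bin/bash
# both tasks start immediately and run concurrently
long_task_one &
long_task_two &
wait    # return only after every background job has exited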
If you want to combine multiple commands on one command line, you can use pipes to chain the operations.
grep "Hello" <file-name> | wc -l
It will give the number of lines containing "Hello" in that file.
Sure. It's called a "shell script". In bash, put all the commands in a file with the suffix ".sh". Then run this:
chmod +x myfile.sh
then type
. ./myfile.sh
or
source ./myfile.sh
(both of which run the commands in your current shell), or just
./myfile.sh
(which runs them in a child process).
To have the commands actually run at the same time, you can use the job control ability of zsh:
$ zsh -c "[command1] [command1 arguments] & ; [command2] [command2 arguments]"
Or if you are running zsh as your current shell:
$ ping google.com & ; ping 127.0.0.1
The ; is a token that lets you put another command on the same line that is run directly after the first command.
The & is a token placed after a command to run it in the background.
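For comparison, plain bash needs no ; after the & (& itself terminates the command there, and "& ;" is a syntax error in bash):
$ ping google.com & ping 127.0.0.1    # bash: & both backgrounds and separates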

run the output of a script as a standalone bash command

Suppose you have a Perl script "foobar.pl" that prints the following to stdout:
date -R
and you want to run whatever that perl script outputs as a standalone bash command (don't worry about security problems as this is running in a trusted environment).
How do you get bash to recognize this as a standalone command?
I've tried using xargs, but that seems to want to pass arguments only to a pre-defined command.
I want the perl script to be able to output any arbitrary command.
$command = 'date -R';
system($command); ## in the perl script
The above does not work because I want it to run in my existing Cygwin environment ...
foobar.pl | xargs bash -i {}
The above does not work because bash seems to be running a new process, and thus the initialization and settings from .bash_profile don't get applied.
`foobar.pl`
Bad:
`perl foo.pl`
$(perl foo.pl)
Why is this bad? For many reasons; most notably:
Word splitting: what you're doing here is taking the output of the perl script, splitting it into chunks wherever there are spaces, tabs or newlines, and taking those chunks as arguments to the first chunk, which becomes the command to run. In extremely simplistic cases like $(echo 'date +%s') it might seem to work, but that is a misleading picture of what is really going on.
Quoting is not honored, and you cannot use any other bash shell features like parameter expansion, bash keywords, etc.
Good, but inconvenient:
perl foo.pl > mytmpfile; bash mytmpfile
Creating a temporary file to put your perl script's output into and then running that with bash works, but it's inconvenient as you need to create (and clean up!) your temporary file and have it in a portably writable (and secure!) location.
Also remember not to use . or source to execute the temporary file unless you really intend to run it all in the active shell. Moreover, when you use . or source, you won't be able to reliably clean up your temporary file afterward.
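The difference is visible with a command that changes shell state, such as cd (the paths shown are illustrative):
$ echo "cd /tmp" > mytmpfile
$ bash mytmpfile ; pwd     # child shell: your directory is unchanged
/home/you
$ . mytmpfile ; pwd        # current shell: the cd sticks
/tmp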
Probably the best solution:
perl foo.pl | bash
This is pretty safe all-round ("safe" here meaning least bug-prone), assuming your perl script outputs correct bash syntax, of course.
Alternatives that do pretty much the same thing:
bash < <(perl foo.pl)
bash <(perl foo.pl)
Given the perl file:
print "date";
the following bash command will do it.
$ $(perl qq.pl)
Mon Apr 6 11:02:07 WAST 2009
But that only works for a simple command and its arguments; shell syntax in the output is not re-parsed. If you really want to invoke the output as a script in the context of the current shell, do this:
$ perl qq.pl >/tmp/qq.$$ ; . /tmp/qq.$$ ; rm -f /tmp/qq.$$
Mon Apr 6 11:04:59 WAST 2009
Try:
foobar.pl | bash
I don't think this is exactly what you're looking for, but it's what I've got :-)
perl foo.pl > /tmp/$$.script; bash /tmp/$$.script; rm /tmp/$$.script
Good luck!
Try reading the command's output through a pipe in Perl: open(my $fh, "-|", $arg1, $arg2)
