In bash, I use the "pushd ." command to save the current directory on the stack.
After issuing this command in a couple of different directories, I have multiple directories saved on the stack, which I can see with the "dirs" command.
For example, the output of "dirs" in my current bash session is given below:
0 ~/eclipse/src
1 ~/eclipse
2 ~/parboil/src
Now, to switch to the 0th directory, I issue the command "cd ~0".
I want to create a bash alias or a function for this command.
Something like "xya 0", which will switch to the 0th directory on the stack.
I wrote the following function to achieve this:
xya(){
    cd ~$1
}
Where "$1" in above function, is the first argument passed to the function "xya".
But, I am getting the following error -
-bash: cd: ~1: No such file or directory
Can you please tell me what is going wrong here?
Generally, bash parsing happens in the following order:
brace expansion
tilde expansion
parameter, variable, arithmetic expansion; command substitution (same phase, left-to-right)
word splitting
pathname expansion
Thus, by the time your parameter is expanded, tilde expansion has already finished and will not take place again without doing something explicit, such as using eval.
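You can see the ordering problem directly at the prompt. This is only an illustration, and it assumes your directory stack looks like the one above; the paths will differ on your machine:
n=1
echo ~1     # tilde expansion applies: expands to directory-stack entry 1 (~/eclipse above)
echo ~$n    # parameter expansion happens too late; tilde expansion is already over, so this prints "~1"
cd ~$n      # cd therefore receives the literal string "~1" -> "No such file or directory"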
If you know the risks and are willing to accept them, use eval to force parsing to restart from the beginning after the expansion of $1 is complete. The version below tries to mitigate the damage should something that isn't eval-safe be passed as an argument:
xya() {
    local cmd
    printf -v cmd 'cd ~%q' "$1"
    eval "$cmd"
}
...or, less cautiously (which is to say that the below trusts your arguments to be eval-safe):
xya() {
    eval "cd ~$1"
}
You can let dirs print the absolute path for you:
xya(){
    cd "$(dirs -l "+${1-0}")"
}
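With the directory stack from the question, any of these versions can then be used like this (just a usage sketch):
$ dirs -v
 0  ~/eclipse/src
 1  ~/eclipse
 2  ~/parboil/src
$ xya 2      # same effect as "cd ~2": you end up in ~/parboil/src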
I am currently trying to move all of my aliases from .bash_profile to .zshrc. However, I found a problem with one of the longer aliases I use, which substitutes root with ubuntu when passing a command to access AWS instances.
AWS (){
    cd /Users/user/aws_keys
    cmd=$(echo $@ | sed "s/root/ubuntu/g")
    $cmd[@]
}
The error I get is AWS:5: command not found: ssh -i keypair.pem ubuntu@ec1.compute.amazonaws.com
I would really appreciate any suggestions!
The basic problem is that the cmd=$(echo ... line is mashing all the arguments together into a space-delimited string, and you're depending on word-splitting to split it up into a command and its arguments. But word-splitting is usually more of a problem than anything else, so zsh doesn't do it by default. This means that rather than trying to run the command named ssh with arguments -i, keypair.pem, etc, it's treating the entire string as the command name.
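The difference is easy to see with a throwaway variable (this is only an illustration of the splitting behaviour, not part of the AWS function):
cmd="ls -l"
$cmd    # bash: word-splits this into "ls" and "-l", so it runs ls -l
        # zsh:  no word-splitting, so it looks for a command literally named "ls -l"
        #       and fails with "command not found: ls -l"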
The simple solution is to avoid mashing the arguments together, so you don't need word-splitting to separate them out again. You can use a modifier on the parameter expansion to replace "root" with "ubuntu". BTW, I also strongly recommend checking for errors from cd, and not proceeding if it fails.
So something like this:
AWS (){
    cd /Users/user/aws_keys || return $?
    "${@//root/ubuntu}"
}
This syntax will work in bash as well as zsh (the double-quotes prevent unexpected word-splitting in bash, and aren't really needed in zsh).
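For example, with an invocation like the one in your error message (the key and host names are just the question's placeholders), the fixed function would do roughly this:
AWS ssh -i keypair.pem root@ec1.compute.amazonaws.com
# runs, from /Users/user/aws_keys:
#     ssh -i keypair.pem ubuntu@ec1.compute.amazonaws.com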
BTW, I'm also a bit nervous about just blindly replacing "root" with "ubuntu" in the arguments; what if it occurs somewhere other than the username, like as part of a filename or hostname?
My script is executable and I run it with sudo. I tried many workarounds and alternatives to the ">>" operator, but nothing seemed to work properly.
My script:
#! /bin/bash
if [[ -z "$1" || -z "$2" ]]; then
    exit 1
else
    root=$1
    fileExtension=$2
fi
$(sudo find $root -regex ".*\.${fileExtension}") >> /home/mux/Desktop/AllFilesOf${fileExtension}.txt
I tried tee, sed, and dd of=; I also tried running it with bash -c or under sudo -i, but nothing worked. Either I get an empty file or a Permission denied error.
I searched thoroughly and read many command manuals, but I can't get it to work.
The $() operator performs command substitution. When the overall command line is expanded, the command within the parentheses is executed, and the whole construct is replaced with the command's output. After all expansions are performed, the resulting line is executed as a command.
Consider, then, this simplified version of your command:
$(find /etc -regex ".*\.conf") >> /home/mux/Desktop/AllFilesOfconf.txt
On my system that will expand to a ginormous command of the form
/etc/rsyslog.conf /etc/pnm2ppa.conf ... /etc/updatedb.conf >> /home/mux/Desktop/AllFilesOfconf.txt
Note at this point that the redirection is separate from, and therefore independent of, the command in the command substitution. Expanding the command substitution therefore does not cause anything to be written to the target file.
But we're not done! That was just the expansion. Bash now tries to execute the result as a command. In particular, in the above example it tries to execute /etc/rsyslog.conf as a command, with all the other file names as arguments, and with output redirected as specified. But /etc/rsyslog.conf is not executable, so that will fail, producing a "permission denied" message. I'm sure you can extrapolate from there what effects different expansions would produce.
I don't think you mean to perform a command substitution at all, but rather just to run the command and redirect its output to the given file. That would simply be this:
sudo find $root -regex ".*\.${fileExtension}" >> /home/mux/Desktop/AllFilesOf${fileExtension}.txt
Update:
As @CharlesDuffy observed, the redirection in that case is performed with the permissions of the user / process running the script, just as it is in your original example. I have supposed that that is intentional and correct -- i.e. that the script is being run by user 'mux' or by another user that has access to mux's Desktop directory and to any existing file in it that the script might try to create or update. If that is not the case, and you need the redirection, too, to be privileged, then you can achieve it like so:
sudo -s <<END
find $root -regex ".*\.${fileExtension}" >> /home/mux/Desktop/AllFilesOf${fileExtension}.txt
END
That runs a shell via sudo, with its input redirected from the heredoc. The variable expansions are performed in the host shell from which sudo is executed. In this case the redirection is performed with the identity obtained via sudo, which affects access control, as well as ownership of the file if a new one is created. You could add a chown command if you don't want the output files to be owned by root.
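Putting it together, a corrected version of the whole script might look something like this. It is only a sketch: it quotes the expansions (the original did not) and keeps the unprivileged redirection from the first fix above:
#!/bin/bash
# Usage: ./script.sh <root-dir> <file-extension>
if [[ -z "$1" || -z "$2" ]]; then
    exit 1
else
    root=$1
    fileExtension=$2
fi
# find runs privileged; the >> redirection runs as the invoking user (mux)
sudo find "$root" -regex ".*\.${fileExtension}" >> "/home/mux/Desktop/AllFilesOf${fileExtension}.txt"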
I am trying to edit my .bashrc file with a custom function to launch xwin. I want it to be able to open in multiple windows, so I decided to make a function that accepts 1 parameter: the display number. Here is my code:
function test(){
    a=$(($1-0))
    "xinit -- :$a -multiwindow -clipboard &"
}
The reason I created a variable "a" to hold the input is that I suspected the input was being read in as a string and not a number. I was hoping that subtracting 0 from the input would convert the string into an integer, but I'm not actually sure whether it does. Now, when I call
test 0
I am given the error
-bash: xinit -- :0 -multiwindow -clipboard &: command not found
How can I fix this? Thanks!
Because the entire quoted command is acting as the command itself:
$ "ls"
a b c
$ "ls -1"
-bash: ls -1: command not found
Get rid of the double quotation marks surrounding your xinit:
xinit -- ":$a" -multiwindow -clipboard &
In addition to the double-quotes bishop pointed out, there are several other problems with this function:
test is a standard, and very important, command. Do not redefine it! If you do, you risk having some script (or sourced file, or whatever) run:
if test $num -eq 5; then ...
Which will fire off xinit on some random window number, then continue the script as if $num was equal to 5 (whether or not it actually is). This way lies madness.
As chepner pointed out in a comment, bash doesn't really have an integer type. To it, an integer is just a string that happens to contain only digits (and maybe a "-" at the front), so converting to an integer is a non-operation. But what you might want to do is check whether the parameter got left off. You can either check whether $1 is empty (e.g. if [[ -z "$1" ]]; then echo "Usage: ..." >&2 etc), or supply a default value with e.g. ${1:-0} (in this case, "0" is used as the default).
Finally, don't use the function keyword. bash tolerates it, but it's nonstandard and doesn't do anything useful.
So, here's what I get as the cleaned-up version of the function:
launchxwin() {
    xinit -- ":${1:-0}" -multiwindow -clipboard &
}
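If you would rather reject a missing argument than silently default to display 0 (the other option mentioned above), a sketch could look like this; the error text is just an example:
launchxwin() {
    if [[ -z "$1" ]]; then
        echo "Usage: launchxwin <display-number>" >&2
        return 1
    fi
    xinit -- ":$1" -multiwindow -clipboard &
}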
That happens because bash interprets everything inside the quotes as a single string. A command is an array of strings in which the first element is a binary (or an internal shell command) and the subsequent strings are taken as arguments.
When you type:
"xinit -- :$a -multiwindow -clipboard &"
the shell thinks that everything you wrote is the name of a single command. Depending on the command/program you run, you may sometimes want to pass the rest of the arguments as a single string, but mostly you use quotes only when you are passing an argument that has spaces in it, like:
mkdir "My Documents"
That creates a single directory named My Documents. Alternatively, you can escape the spaces like this:
mkdir My\ Documents
But remember, "$" is a special character, like "\". The shell interprets it as introducing a variable: "$a" will be substituted by its value before the command is executed. If you use single quotes ('$a') it will not be interpreted by the shell.
Also, "&" is a special character that executes the command in background. You should probably pass it outside the quotes also.
I want to write a script that will change to different directories depending on my input, something like this:
test.sh:
#!/bin/bash
ssh machine001 '(chdir ~/dev$1; pwd)'
But when I run ./test.sh 2, it still goes to ~/dev. It seems that my argument gets ignored. Am I doing anything very stupid here?
Bash ignores any variable syntax inside single-quoted (') strings. You need double quotes (") for the substitution to happen:
#!/bin/bash
ssh machine001 "(chdir ~/dev$1; pwd)"
The parameter is enclosed in single quotes, so it isn't expanded on the local side. Use double-quotes instead.
#!/bin/bash
ssh machine001 "chdir ~/dev$1; pwd"
There's no need for the (...), since you are only running the pair of commands then exiting.
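You can check locally what actually gets sent to the remote side by substituting echo for ssh (only a sanity-check sketch):
set -- 2                      # simulate the script being called with "2"
echo 'chdir ~/dev$1; pwd'     # single quotes: the remote side would see the literal text $1
echo "chdir ~/dev$1; pwd"     # double quotes: the remote side sees "chdir ~/dev2; pwd"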
I am trying to get Bash to execute the following minimized example properly:
# Runs a command, possibly quoted (i.e. single argument)
function run()
{
    $*
}
run ls # works fine
run "ls" # also works
run "ls `pwd`" # also works, but pwd is eagerly evaluated (I want it to evaluate inside run)
run "ls \\\`pwd\\\`" # doesn't work (tried other variants as well)
To summarize, I want the ability to pass commands as quoted strings (or not), without any part of the command, including nested shell commands in backticks, calculated values, etc., being evaluated before run() is called. Is this possible? How can I achieve this?
Well, the way to do this sort of thing is to use the eval builtin together with an escaped '$':
function run()
{
    eval $*
}
my_command="ls \$(pwd)"
Escaping '$' as '\$' ensures that my_command is set to "ls $(pwd)" with no substitution taking place yet. eval then performs the substitution when run is called.
Then running
run $my_command    # lists the contents of the current directory
cd ..
run $my_command    # now lists the contents of the parent directory
shows that you get the functionality you want: $(pwd) is only evaluated when eval runs.
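The same escaping also works if you prefer to pass the command as a single quoted string, as in the original question (a small usage sketch):
run "ls \$(pwd)"    # the whole command passed as one quoted argument
cd ..
run "ls \$(pwd)"    # evaluated at call time again, so it lists the new directory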
my2c