This is a very noob question but I'm finding it difficult to nail it down quickly.
I want to write a bash script that lets me start and shut down Tomcat as I require, regardless of my current working directory.
I know how to write functions in bash; what I want to know is how to execute a .sh file.
How do I do this?
Thanks
You want to use an alias. Put something like this in your ~/.bashrc or ~/.bash_profile, for example:
alias lsft="find ./ -type f | grep -E '.*\.[a-zA-Z0-9]*$' | sed -e 's/.*\(\.[a-zA-Z0-9]*\)$/\1/' | sort | uniq -c | sort -n"
So if you log out/log in, or run source ~/.bash_profile, you can run lsft and it will list a count of the files by extension in the current directory.
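Applied to the original Tomcat question, a minimal sketch would be two aliases that call Tomcat's own control scripts by absolute path (assuming Tomcat lives under /opt/tomcat; adjust the path to your installation):
alias tomcat-start='/opt/tomcat/bin/startup.sh'
alias tomcat-stop='/opt/tomcat/bin/shutdown.sh'
Because the paths are absolute, these work no matter what your current working directory is.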
Related
I wrote a basic script that changes the directory to a specific path and shows the list of folders there, but my script shows the list of files of the current folder where my script lies instead of the one I specify in the script.
Here is my script:
#!/bin/bash
v1="$(ls -l | awk '/^-/{ print $NF }' | rev | cut -d "_" -f2 | rev)"
v2=/home/PS212-28695/logs/
cd $v2 && echo $v1
Does anyone know what I am doing wrong?
Your current script does not do what you expect. The v1 variable is NOT a command to execute later; due to the $() syntax it is in fact the output of the ls pipeline at the moment of assignment, and that is why you get files from the current directory: it is your working directory at that particular moment. So you should rather do a plain
ls -t /home/PS212-28695/logs/
EDIT
It runs, but what if I need to store the ls -t output in a variable?
Then it is the same syntax you already had, but with the proper arguments:
v1=$(ls -t /home/PS212-28695/logs/)
echo ${v1}
If for any reason you want to cd, then you have to do that prior to setting v1, for the same reason I explained above.
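A minimal sketch of that cd-first variant, using the same directory (the || exit simply bails out if the directory is missing):
cd /home/PS212-28695/logs/ || exit 1
v1=$(ls -t)
echo "${v1}"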
I'm trying to write a small shell script to find the most recently-added file in a directory and then move that file elsewhere. If I use:
ls -t ~/directory | head -1
and then store this in the variable VARIABLE_NAME, why can't I then move this to ~/otherdirectory via:
mv ~/directory/$VARIABLE_NAME ~/otherdirectory
I've searched around here and Googled, but there doesn't seem to be any information on using variables in file paths. Is there a better way to do this?
Edit: Here's the portion of the script:
ls -t ~/downloads | head -1
read diags
mv ~/downloads/$diags ~/desktop/testfolder
You can do the following in your script:
diags=$(ls -t ~/downloads | head -1)
mv ~/downloads/"$diags" ~/desktop/testfolder
In this case, diags is assigned the output of ls -t ~/downloads | head -1, which mv can then use.
The following commands
ls -t ~/downloads | head -1
read diags
are probably not what you intend: the read command does not receive its input from the command before. Instead, it waits for input from stdin, which is why you believe the script to 'hang'. Maybe you wanted to do the following (at least this was my first erroneous attempt at providing a better solution):
ls -t ~/downloads | head -1 | read diags
However, this will (as mentioned by alvits) also not work, because each element of the pipe runs as a separate command: The variable diags therefore is not part of the parent shell, but of a subprocess.
The proper solution therefore is:
diags=$(ls -t ~/downloads | head -1)
There are, however, further possible problems, which would make the subsequent mv command fail:
The directory might be empty.
The file name might contain spaces, newlines etc.
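A slightly more defensive sketch that guards against both pitfalls as far as ls allows (an empty directory yields an empty variable, and quoting keeps names with spaces intact; names containing newlines would still break this ls-based approach):
diags=$(ls -t ~/downloads | head -1)
if [ -n "$diags" ]; then
    mv ~/downloads/"$diags" ~/desktop/testfolder/
fi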
I used to use fswatch v0.0.2 like so (in this instance, to run the Django test suite when a file changed):
$>fswatch . 'python manage.py test'
this works fine.
I wanted to exclude some files that were causing the test to run more than once per save (Sublime Text was saving a .tmp file, and I suspect .pyc files were also causing this).
So I upgraded fswatch to enable the -e mode.
However, the way fswatch is invoked has changed, which is causing me trouble: it now expects to be used in a pipe, like so:
$>fswatch . | xargs -n1 program
I can't figure out how to pass arguments to the program here. E.g. this does not work:
$>fswatch . | xargs -n1 python manage.py test
nor does this:
$>fswatch . | xargs -n1 'python manage.py test'
how can I do this without packaging up my command in a bash script?
The fswatch documentation (the Texinfo manual, the wiki, and the README) has examples of how this is done:
$ fswatch [opts] -0 -o path ... | xargs -0 -n1 -I{} your full command goes here
Pitfalls:
xargs -0, fswatch -0: use it to make sure paths with newlines are interpreted correctly.
fswatch -o: use it to have fswatch "bubble" all the events in the set into a single one printing only the number of records in the set.
-I{}: specifying a placeholder is the trick you missed; it makes xargs interpret your command and its arguments correctly in those cases where you do not want the record (in this case, since -o was used, the number of records in the set) to be appended to the command being executed.
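Putting those pieces together for the original Django use case, something along these lines should work (the -e patterns are regular expressions and purely illustrative; adjust them to whatever your editor actually writes):
fswatch -0 -o -e '\.pyc$' -e '\.tmp$' . | xargs -0 -n1 -I{} python manage.py test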
An alternative answer that does not fight xargs' reason for being, which is passing its input on as arguments to the command to be run:
fswatch . | (while read; do python manage.py test; done)
Which is still a bit wordy/syntaxy, so I have created a super simple bash script fswatch-do that simplifies things for me:
#!/bin/bash
# run the given command once for every line received on stdin
(while read -r; do "$@"; done)
usage:
fswatch -r -o -e 'pyc' somepath | fswatch-do python manage.py test someapp.SomeAppTestCase
When securing a Drupal or WordPress installation on a shared host that does not expose SSH access (a lousy situation, fwiw), lftp seems like the right approach to batch-setting permissions on directories and files. The find command boasts that you can redirect its output, so one should be able to run find, use grep to match only lines ending in "/" (meaning directories), set the permissions on those matches to 755, do the inverse for file matches and set them to 644, and then fine-tune specific files, such as settings.php and so forth.
lftp prompt> find . | grep "/$" | xargs chmod -v 755
This isn't working, and I'm sure I have failed to chain these commands in the correct sequence and format.
How to get this to work?
Update: by "isn't working" I mean that the above command produces no output to the console, nor to the lftp error log. It isn't running these commands locally, fwiw. I'll reduce the command as a demonstration:
find . | grep "/$"
This will take the output of find and return the matching lines, here the directories, by virtue of the string match:
./daily/
./ffmpeg-installer/
./hourly/
./includes/
./includes/database/
./includes/database/mysql/
./and_so_forth_on_down
Which is cool, since I wish to perform a chmod (an internal command for lftp, with support varying by FTP server). So I expand the command like this:
find . | grep "/$" | xargs echo
This outputs nothing. No error output, either. The pipe from grep to xargs isn't happening.
My goal is to form the equivalent of:
chmod 755 ./daily/
chmod 755 ./ffmpeg-installer/
In lftp, the chmod command is performing an ftp-server-permissions change, not a local perms change.
For an explanation of why this does not work as expected, read on - for a solution to the given problem, scroll down.
The answer can be found in the manpage for lftp, which states that
"[s]ome commands allow redirecting their output (cat, ls, ...) to file or via pipe to external command."
So, when you use a pipe like this on a command that does support redirection in lftp, you are piping its output to your local tools. That eventually results in chmod trying to change the permissions of a file/directory on your local machine, which will most likely fail unless you coincidentally have the same directory layout in your current local directory. This is probably the problem you encountered.
The grep + xargs pipe does work, I just tested the following:
lftp> find -d 2 | grep "/$"
./
./applications/
./lost+found/
./netinfo/
./packages/
./security/
./systems/
lftp> find -d 2 | grep "/$" | xargs echo
./ ./applications/ ./lost+found/ ./netinfo/ ./packages/ ./security/ ./systems/
My wild guess is that it did not appear to work for you because you did not specify a max-depth for find, and the network connection plus the buffering in the pipe got in the way. When I try the same on a directory containing many files/subfolders, it takes really long to finish and print. Did the command actually finish for you without output?
But still, what you are trying to do is not possible. As I stated, the right-hand side of the pipe works with external commands (even if a built-in of the same name exists), as explained by the manual, so
lftp> chmod 644 foobar
and
lftp> echo "foobar" | xargs chmod 644
are not equivalent.
Yes, chmod is a built-in, but when it is used on the right-hand side of a pipe in the client, the built-in is not what gets executed; the manpage clearly states this, and you can easily test it yourself. Try the following commands and check their output:
lftp> echo foo | uname -a
lftp> echo foo | ls -al
lftp> echo foo | chmod --help
lftp> chmod --help
Solution
As far as a solution to your problem is concerned, you can try something along the lines of:
#!/bin/bash
server="ftp.foo.bar"
root_folder="/my/path"
{
{
lftp "${server}" <<EOF
cd "${root_folder}"
find | grep "/$"
quit
EOF
} | awk '{ printf "chmod 755 \"%s\"\n", $0 }'
{
lftp "${server}" <<EOF
cd "${root_folder}"
find | grep -v "/$"
quit
EOF
} | awk '{ printf "chmod 644 \"%s\"\n", $0 }'
} | lftp "${server}"
This logs in to your server, cds to the folder where you want to recursively start changing the permissions, uses find + grep to find all directories, and logs out. It then pipes this file list into awk to build chmod commands around it, repeats the whole process for files, and finally pipes the whole list of commands into a new lftp invocation to actually run the generated chmod commands.
You will also have to add your credentials to the lftp invocations and you might want to comment out the final | lftp "${server}" to check if it produces the desired output before you actually run the whole thing. Please report back if this works for you!
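For the credentials, one option is lftp's -u flag; in a sketch like this, the user name and password are of course placeholders:
lftp -u "myuser,mypassword" "${server}"
Storing them in ~/.netrc instead keeps them out of the script and out of your shell history.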
I tried to make an alias for committing several different git projects. I tried something like
cat projectPaths | \
xargs -I project git --git-dir=project/.git --work-tree=project commit -a
where projectPaths is a file containing the paths to all the projects I want to commit. This seems to work for the most part, firing up vi in sequence for each project so that I can write a commit message for it. I do, however, get this message:
"Vim: Warning: Input is not from a terminal"
and afterward my terminal is weird: it doesn't show the text I type and doesn't seem to output any newlines. When I enter "reset", things pretty much go back to normal, but clearly I'm doing something wrong.
Is there some way to get the same behavior without messing up my shell?
Thanks!
Using the simpler example of
ls *.h | xargs vim
here are a few ways to fix the problem:
xargs -a <( ls *.h ) vim
or
vim $( ls *.h | xargs )
or
ls *.h | xargs -o vim
The first example uses the xargs -a (--arg-file) flag which tells xargs to take its input from a file rather than standard input. The file we give it in this case is a bash process substitution rather than a regular file.
Process substitution takes the output of the command contained in <( ), places it in a file descriptor, and then substitutes that file descriptor; in this case the substituted command would be something like xargs -a /dev/fd/63 vim.
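You can see the file-descriptor substitution for yourself with a quick, purely illustrative check:
echo <( ls *.h )
which prints something like /dev/fd/63 rather than the file names.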
The second command uses command substitution: the commands are executed in a subshell, and their stdout data is substituted into the command line.
The third command uses the xargs --open-tty (-o) flag, which the man page describes thusly:
Reopen stdin as /dev/tty in the child process before executing the
command. This is useful if you want xargs to run an interactive
application.
If you do use it the old way and want to get your terminal to behave again you can use the reset command.
The problem is that since you're running xargs (and hence git and hence vim) in a pipeline, its stdin is taken from the output of cat projectPaths rather than the terminal; this is confusing vim. Fortunately, the solution is simple: add the -o flag to xargs, and it'll start git (and hence vim) with input from /dev/tty, instead of its own stdin.
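Applied to the original pipeline, that is just one extra flag (a sketch, assuming your xargs actually supports -o/--open-tty):
cat projectPaths | xargs -o -I project git --git-dir=project/.git --work-tree=project commit -a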
The man page for GNU xargs shows a similar command for emacs:
xargs sh -c 'emacs "$@" < /dev/tty' emacs
(in this command, the second "emacs" is the "dummy string" that wisbucky refers to in a comment to this answer)
and says this:
Launches the minimum number of copies of Emacs needed, one after the
other, to edit the files listed on xargs' standard input. This example
achieves the same effect as BSD's -o option, but in a more flexible and
portable way.
Another thing to try is using -a instead of cat:
xargs -a projectPaths -I project git --git-dir=project/.git --work-tree=project commit -a
or some combination of the two.
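For example, combining the man page's sh -c trick with the file of project paths gives a version that needs neither -o nor -a (a sketch; the trailing sh only fills $0 for the inner shell):
xargs sh -c 'for p; do git --git-dir="$p/.git" --work-tree="$p" commit -a < /dev/tty; done' sh < projectPaths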
If you have GNU Parallel http://www.gnu.org/software/parallel/ installed you should be able to do this:
cat projectPaths |
parallel -uj1 git --git-dir={}/.git --work-tree={} commit -a
In general this works too:
cat filelist | parallel -Xuj1 $EDITOR
in case you want to edit more than one file at a time (and you have set $EDITOR to your favorite editor).
-o for xargs (as mentioned elsewhere) only works for some versions of xargs; it originated as a BSD extension, and GNU xargs gained an equivalent --open-tty option only in relatively recent releases.
Watch the intro video to learn more about GNU Parallel http://www.youtube.com/watch?v=OpaiGYxkSuQ
Interesting! I see the exact same behaviour on Mac as well, doing something as simple as:
ls *.h | xargs vim
Apparently, it is a problem with vim:
http://talideon.com/weblog/2007/03/xargs-vim.cfm