I use bash and I sometimes have to type some very long commands for certain tasks. These long commands are not regularly used and generally get overwritten in the .bash_history file by the time I use them again. How can I add certain commands to .bash_history permanently?
Thanks.
As others have mentioned, the usual way to do that would be to store your long commands as either aliases or bash functions. A nice way to organise them would be to put them all in a file (say $HOME/.custom_funcs) then source it from .bashrc.
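A minimal sketch of that layout (the alias name, function name and bodies here are only placeholders, not anything from the question):

# $HOME/.custom_funcs -- one place for rarely used long commands
alias wwwtitles='find /var/www -name "*.html" -exec fgrep "<title>" {} /dev/null \;'
big_backup() {
    tar czf "backup-$(date +%F).tar.gz" "$HOME/projects"
}

and in .bashrc:

[ -f "$HOME/.custom_funcs" ] && . "$HOME/.custom_funcs"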
If you really really want to be able to load commands into your bash history, you can use the -r option of the history command.
From the man page:
-r Read the current history file and append its contents to the history list.
Just store all your entries in a file, and whenever you need your custom history loaded simply run history -r <your_file>.
Here's a demo:
[me@home]$ history | tail # see current history
1006 history | tail
1007 rm x
1008 vi .custom_history
1009 ls
1010 history | tail
1011 cd /var/log
1012 tail -f messages
1013 cd
1014 ls -al
1015 history | tail # see current history
[me@home]$ cat $HOME/.custom_history # content of custom history file
echo "hello world"
ls -al /home/stack/overflow
(cd /var/log/messages; wc -l *; cd -)
[me@home]$ history -r $HOME/.custom_history # load custom history
[me@home]$ history | tail # see updated history
1012 tail -f messages
1013 cd
1014 ls -al
1015 history | tail # see current history
1016 cat .custom_history
1017 history -r $HOME/.custom_history
1018 echo "hello world"
1019 ls -al /home/stack/overflow
1020 (cd /var/log/messages; wc -l *; cd -)
1021 history | tail # see updated history
Note how entries 1018-1020 weren't actually run but instead were loaded from the file.
At this point you can access them as you would normally, using the history or ! commands, the Ctrl+r shortcut and the like.
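If you would rather have that file loaded in every new shell instead of on demand, a one-line sketch for .bashrc (assuming the file lives at $HOME/.custom_history) would be:

[ -f "$HOME/.custom_history" ] && history -r "$HOME/.custom_history"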
How about just extending the size of your bash history file with the shell variable HISTFILESIZE?
Instead of the default 500, make it something like 2000.
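For example, in .bashrc (2000 is just an illustrative value; HISTSIZE controls the in-memory list, HISTFILESIZE the size of ~/.bash_history on disk):

HISTSIZE=2000        # commands kept in memory for the current session
HISTFILESIZE=2000    # lines kept in ~/.bash_history across sessions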
The canonical answer is to create scripts containing these commands.
Edit: Supposing you have the following history entries:
find /var/www -name '*.html' -exec fgrep '<title>' {} /dev/null \;
find ~/public_html -name '*.php' -exec fgrep include {} /dev/null \;
... you can try to isolate the parameters into a function, something like this:
r () {
    find "$1" -name "*.$2" -exec fgrep "$3" {} /dev/null \;
}
... which you could use like this, to repeat the history entries from above:
r /var/www html '<title>'
r ~/public_html php include
Obviously, it is then only a small step to create a proper script with defaults, parameter validation, etc. (Hint: you could usefully default to the current directory for the path and no extension filter for the file name, and add options like --path and --ext to override the defaults when you want to; then there will be only one mandatory argument to the script.)
Typically, you would store the script in $HOME/bin and make sure this directory is added to your PATH from your .profile or similar. For functions, these are usually defined in .profile, or a separate file which is sourced from this file.
Having it in a central place also helps develop it further; for example, for precision and perhaps some minor added efficiency, you might want to add -type f to the find command; now there is only one place to remember to edit, and you will have it fixed for good.
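A minimal sketch of such a script, say $HOME/bin/r (the option names --path and --ext and the defaults are only illustrative, not something from the original answer):

#!/bin/bash
# r -- fgrep for a pattern in files under a path, with optional filters
path=.          # default: current directory
name='*'        # default: no extension filter
while [ $# -gt 1 ]; do
    case $1 in
        --path) path=$2;      shift 2 ;;
        --ext)  name="*.$2";  shift 2 ;;
        *)      break ;;
    esac
done
[ $# -eq 1 ] || { echo "usage: r [--path DIR] [--ext EXT] PATTERN" >&2; exit 1; }
find "$path" -type f -name "$name" -exec fgrep "$1" {} /dev/null \;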
You can create an alias for your command.
alias myalias='...a very long command here...'
Related
I have a directory like this:
A/B/C/D/data. Inside it exist folders like 202012, 202013, etc.
Now, I want to find all folders starting with 2020 inside data folder and then obtain the name of the one which was created most recently. So, I did this,
find /A/B/C/D/data/ -name "2020*" -type d. This gave me all folders starting with 2020. Now, when I am piping the output of this to ls -t | head -1 using the | operator, it simply returns the data folder. My expectation is that it should return the latest folder inside the data folder.
I am doing like this,
find /A/B/C/D/data/ -name "2020*" -type d | ls -t | head -1
How can I do this?
Thanks!
shopt -s globstar # Enable **
ls -dFt /A/B/C/D/data/**/2020*
Note that this would also list files starting with 2020, not only directories. For this reason, I used the -F flag. This appends a / to each directory, so you can distinguish files and directories more easily. If you are sure that your directory entries don't contain newline characters or slashes, you can pipe the output to | grep '/$' and get only the directories.
If you need this for a quick inspection in an interactive shell, I would do a ls -dFtr .... to get them sorted in reverse order. This makes sure that the ones you are interested in show up at the end of the list.
You need to run the output of find through xargs to give it as command line arguments to ls:
find /A/B/C/D/data/ -name "2020*" -type d | xargs ls -t -d | head -1
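If the directory names might contain spaces, a slightly more defensive variant (assuming GNU find and xargs for the -print0/-0 pair) would be:

find /A/B/C/D/data/ -name "2020*" -type d -print0 | xargs -0 ls -td | head -1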
At the BASH prompt I can find files then append xargs to do further stuff. e.g. find ... | xargs rm {}
However, sometimes there is a manual intermediate step: I use fzf to refine the find results.
I would like to use this filtered list of files to create an incomplete xargs command at the terminal.
For example, if my find command produces
file1 file2 file3, and my fzf narrows this down to file2 file3, I would like the script to create an incomplete line at the terminal like this:
file2 file3 |xargs -0 --other-standard-options
but I don't want the command to flush (I don't know what the correct term is) as if I had pressed Enter. I want to be able to complete the command myself (e.g. rm {}), after seeing the list of files printed on the line.
The find command will need to use the -print0 option.
I suppose the script would look something like this:
find . | fzf -m | *echo incomplete xargs command*.
the echo -n command is not what I want: it still passes the command to the bash shell.
Maybe there is a better way of using find, then manually checking and filtering, then executing a command like rm or mv, and if so, that would be an acceptable answer.
The number of files I need to be able to deal with after the filtering is small (<100).
So it looks like fzf has some env variables that can be used to filter the set of data that comes back, as well as ones for default options. I would take a look at the following env variables:
FZF_DEFAULT_COMMAND and FZF_DEFAULT_OPTS.
FZF_DEFAULT_COMMAND will allow you to use a different command to produce your file set. Check out the documentation here.
Meaning you may not need to use find and pipe that to the fzf command. You could just start with the fzf command itself and set the proper env variables.
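For example (the find expression and the options chosen here are only illustrative defaults, not something fzf ships with):

export FZF_DEFAULT_COMMAND='find . -type f'      # what fzf runs when nothing is piped into it
export FZF_DEFAULT_OPTS='--multi --height=40%'   # default flags, e.g. multi-select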
To use the interactive mode to remove and review files on the same line, you could do something like the following.
find . -type f -name '*test*' | fzf -m | xargs rm
for f in $(find . -type f -name '*test*' | fzf -m); do
    read -p ".... ${f} Enter command to insert on the dots "
    echo "$REPLY ${f}"
    $REPLY "${f}"
done
The following seems to work, leaving the user to complete the command at the prompt:
out=$(find . | fzf -m)
prefix="echo "
suffix="| xargs "
files="$(echo "${out}" | sed -e 's:^\.:"\.:g' -e 's:$:":g' | tr '\n' ' ')"
cmd="${prefix}${files}${suffix}"
read -e -i "$cmd"; eval "$REPLY"
Explanation: fzf outputs filenames which I wrap in double quotes for the sake of safety; the terminal command I want to create looks like this:
echo "file1" "file2" "file3"|xargs ....
All the real credit goes to @meuh here
I want to recursively rename all files in directory path by changing their prefix.
For Example
XYZMyFile.h
XYZMyFile.m
XYZMyFile1.h
XYZMyFile1.m
XYZMyFile2.h
XYZMyFile2.m
TO
ABCMyFile.h
ABCMyFile.m
ABCMyFile1.h
ABCMyFile1.m
ABCMyFile2.h
ABCMyFile2.m
These files are under a directory structure with many layers. Can someone help me with a shell script for this bulk task?
A different approach maybe:
ls *.{h,m} | while read a; do n=ABC$(echo "$a" | sed -e 's/^XYZ//'); mv "$a" "$n"; done
Description:
ls *.{h,m} --> Find all files with .h or .m extension
n=ABC --> Add an ABC prefix to the file name
sed -e 's/^XYZ//' --> Removes the XYZ prefix from the file name
mv $a $n --> Performs the rename
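Since the question asks for a recursive rename, a sketch of the same idea driven by find instead of ls (assuming GNU find for -print0, and using mv -n to avoid clobbering) could look like this:

find . -type f \( -name 'XYZ*.h' -o -name 'XYZ*.m' \) -print0 |
while IFS= read -r -d '' f; do
    dir=${f%/*}                        # directory part stays the same
    base=${f##*/}                      # filename only
    mv -n "$f" "$dir/ABC${base#XYZ}"   # swap the XYZ prefix for ABC
done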
Set globstar first and then use rename like below:
# shopt -s globstar # This will cause '**' to expand to everything, recursively
# ls -R
.:
nXYZ1.c nXYZ2.c nXYZ3.c subdir XYZ1.m XYZ2.m XYZ3.m
nXYZ1.h nXYZ2.h nXYZ3.h XYZ1.c XYZ2.c XYZ3.c
nXYZ1.m nXYZ2.m nXYZ3.m XYZ1.h XYZ2.h XYZ3.h
./subdir:
nXYZ1.c nXYZ1.m nXYZ2.h nXYZ3.c nXYZ3.m XYZ1.h XYZ2.c XYZ2.m XYZ3.h
nXYZ1.h nXYZ2.c nXYZ2.m nXYZ3.h XYZ1.c XYZ1.m XYZ2.h XYZ3.c XYZ3.m
# rename 's/^XYZ(.*.[mh])$/ABC$1/;s/^([^\/]*\/)XYZ(.*.[mh])$/$1ABC$2/' **
# ls -R
.:
ABC1.h ABC2.m nXYZ1.c nXYZ2.c nXYZ3.c subdir XYZ3.c
ABC1.m ABC3.h nXYZ1.h nXYZ2.h nXYZ3.h XYZ1.c
ABC2.h ABC3.m nXYZ1.m nXYZ2.m nXYZ3.m XYZ2.c
./subdir:
ABC1.h ABC2.h ABC3.h nXYZ1.c nXYZ1.m nXYZ2.h nXYZ3.c nXYZ3.m XYZ2.c
ABC1.m ABC2.m ABC3.m nXYZ1.h nXYZ2.c nXYZ2.m nXYZ3.h XYZ1.c XYZ3.c
# shopt -u globstar # Unset globstar
This may be the simplest way to achieve your objective.
Note 1: Here I am not changing nXYZ to nABC, as you may have noticed. If they are meant to be changed, the simplified rename command would be
rename 's/XYZ(.*.[mh])$/ABC$1/' **
Note 2: The question mentions nothing about multiple occurrences of XYZ, so nothing has been done in that regard.
Easy find and rename (the binary in /usr/bin, not the Perl rename mentioned elsewhere)
Yes, there is already a command to do this non-recursively.
rename XYZ ABC XYZ*
rename --help
Usage:
rename [options] expression replacement file...
Options:
-v, --verbose explain what is being done
-s, --symlink act on symlink target
-h, --help display this help and exit
-V, --version output version information and exit
For more details see rename(1).
Edit: I missed the "many layers of directory" part of the question, because it's a little messy. Adding the find.
Easiest to remember:
find . -type f \( -name '*.h' -o -name '*.m' \) -exec rename XYZ ABC {} \;
Probably faster to finish:
find . -type d -not -path "*/\.*" -not -name ".*" -exec sh -c 'rename XYZ ABC "$0"/*.h "$0"/*.m' {} \;
I'm not sure how to get easier than one command line of code.
For non-recursive, you can use rename which is a perl script:
rename -v -n 's/^.+(?=MyFile)/what-you-want/' *.{h,m}
test:
dir > ls | cat -n
1 XYZMyFile1.h
2 XYZMyFile1.m
3 XYZMyFile.h
4 XYZMyFile.m
dir >
dir > rename -v -n 's/^.+(?=MyFile)/what-you-want/' *.{h,m}
rename(XYZMyFile1.h, what-you-wantMyFile1.h)
rename(XYZMyFile1.m, what-you-wantMyFile1.m)
rename(XYZMyFile.h, what-you-wantMyFile.h)
rename(XYZMyFile.m, what-you-wantMyFile.m)
dir >
and for recursive use, combine find with this command (a sketch follows below).
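A sketch of that combination, assuming the Perl rename (File::Rename) and GNU find are the ones installed:

find . -type f \( -name 'XYZ*.h' -o -name 'XYZ*.m' \) -execdir rename -v 's/XYZ/ABC/' {} +

-execdir runs rename next to each matched file, so only the filename is touched, and the -name filter guarantees the first XYZ it finds is the leading prefix.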
If you do not have access to rename, you can use perl directly like so:
perl -le '($old=$_) && s/^XYZ/ABC/ && rename($old,$_) for <*.[mh]>'
and with renrem, a CLI tool I developed in C++ specifically for renaming
I am flattening a directory of nested folders/picture files down to a single folder. I want to move all of the nested files up to the root level.
There are 3,381 files (no directories included in the count). I calculate this number using these two commands and subtracting the directory count (the second command):
find ./ | wc -l
find ./ -type d | wc -l
To flatten, I use this command:
find ./ -mindepth 2 -exec mv -i -v '{}' . \;
Problem is that when I get a count after running the flatten command, my count is off by 46. After going through the list of files before and after (I have a backup), I found that the mv command is overwriting files sometimes even though I'm using -i.
Here's details from the log for one of these files being overwritten...
.//Vacation/CIMG1075.JPG -> ./CIMG1075.JPG
..more log
..more log
..more log
.//dog pics/CIMG1075.JPG -> ./CIMG1075.JPG
So I can see that it is overwriting. I thought -i was supposed to stop this. I also tried a -n and got the same number. Note, I do have about 150 duplicate filenames. Was going to manually rename after I flattened everything I could.
Is it a timing issue?
Is there a way to resolve?
NOTE: it does prompt me that some of the files would be overwritten. On those prompts I just press Enter so as not to overwrite. In the case above, there was no prompt; it just overwrote.
Apparently the manual entry clearly states:
The -n and -v options are non-standard and their use in scripts is not recommended.
In other words, you should mimic the -n option yourself. To do that, just check if the file exists and act accordingly. In a shell script where the file is supplied as the first argument, this could be done as follows:
[ -f "${1##*/}" ]
The file, passed as the first argument, may contain directory components, which can be stripped using ${1##*/}. Now simply execute the mv using ||, since we want to execute it only when the file doesn't exist.
[ -f "${1##*/}" ] || mv "$1" .
Using this, you can edit your find command as follows:
find ./ -mindepth 2 -exec bash -c '[ -f "${0##*/}" ] || mv "$0" .' '{}' \;
Note that we now use $0 because of the bash -c usage. Its first argument, $0, can't be the script name because we have no script. This means the argument order is shifted with respect to a usual shell script.
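For very large trees, a minimal sketch of the same check, batched with {} + so a single bash handles many files at once (note it is restricted to regular files with -type f, unlike the original command):

find ./ -mindepth 2 -type f -exec bash -c '
    for src in "$@"; do
        dest=./${src##*/}                     # keep only the filename
        [ -e "$dest" ] || mv "$src" "$dest"   # skip if anything with that name already exists
    done
' _ {} +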
Why not check if the file exists prior to moving? Then you can leave the file where it is, or you can rename it or do something else...
test -f or [ ] should do the trick.
I am on a tablet and cannot easily include the source.
I've got to get a directory listing that contains about 2 million files, but when I do an ls command on it nothing comes back. I've waited 3 hours. I've tried ls | tee directory.txt, but that seems to hang forever.
I assume the server is doing a lot of inode sorting. Is there any way to speed up the ls command to just get a directory listing of filenames? I don't care about size, dates, permission or the like at this time.
ls -U
will do the ls without sorting.
Another source of slowness is --color. On some Linux machines, there is a convenience alias which adds --color=auto to the ls call, making it look up file attributes for each file found (slow) in order to color the display. This can be avoided by ls -U --color=never or \ls -U.
I have a directory with 4 million files in it and the only way I got ls to spit out files immediately without a lot of churning first was
ls -1U
Try using:
find . -maxdepth 1 -type f
This will only list the files in the directory, leave out the -type f argument if you want to list files and directories.
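For example, to dump just the names to a file without any sorting or attribute lookups (a minimal sketch; the rough count assumes no newlines in filenames):

find . -maxdepth 1 -type f > filelist.txt   # unsorted names only
wc -l < filelist.txt                        # rough count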
This question seems interesting, and I was going through the multiple answers that were posted. To understand the efficiency of those answers, I executed them on 2 million files and found the results below.
$ time tar cvf /dev/null . &> /tmp/file-count
real 37m16.553s
user 0m11.525s
sys 0m41.291s
------------------------------------------------------
$ time echo ./* &> /tmp/file-count
real 0m50.808s
user 0m49.291s
sys 0m1.404s
------------------------------------------------------
$ time ls &> /tmp/file-count
real 0m42.167s
user 0m40.323s
sys 0m1.648s
------------------------------------------------------
$ time find . &> /tmp/file-count
real 0m2.738s
user 0m1.044s
sys 0m1.684s
------------------------------------------------------
$ time ls -U &> /tmp/file-count
real 0m2.494s
user 0m0.848s
sys 0m1.452s
------------------------------------------------------
$ time ls -f &> /tmp/file-count
real 0m2.313s
user 0m0.856s
sys 0m1.448s
------------------------------------------------------
To summarize the results
The ls -f command ran a bit faster than ls -U. Disabling color might have caused this improvement.
The find command came third, with an average of 2.738 seconds.
Running plain ls took 42.16 seconds. On my system ls is an alias for ls --color=auto.
Using the shell expansion feature with echo ./* took 50.80 seconds.
And the tar-based solution took about 37 minutes.
All tests were done separately while the system was idle.
One important thing to note here is that the file lists were not printed to the terminal; rather,
they were redirected to a file, and the file count was calculated later with the wc command.
The commands ran far too slowly if the output was printed on the screen.
Any ideas why this happens ?
This would be the fastest option AFAIK: ls -1 -f.
-1 (No columns)
-f (No sorting)
Using
ls -1 -f
is about 10 times faster, and it is easy to do (I tested with 1 million files, but my original problem had 6 800 000 000 files).
But in my case I only needed to check whether a specific directory contained more than 10 000 files. If there were more than 10 000 files, I was no longer interested in exactly how many there were; I just quit the program so that it runs faster and won't try to read the rest one by one. If there are fewer than 10 000, I print the exact amount. The speed of my program is quite similar to ls -1 -f if you specify a bigger value for the parameter than the actual number of files.
You can use my program find_if_more.pl in the current directory by typing:
find_if_more.pl 999999999
If you are just interested in whether there are more than n files, the script will finish faster than ls -1 -f with a very large number of files.
#!/usr/bin/perl
use strict;
use warnings;
my ($maxcount) = @ARGV;
my $dir = '.';
my $filecount = 0;
if (not defined $maxcount) {
    die "Need maxcount\n";
}
opendir(DIR, $dir) or die $!;
while (defined(my $file = readdir(DIR))) {
    $filecount = $filecount + 1;
    last if $filecount > $maxcount;
}
print $filecount;
closedir(DIR);
exit 0;
You can redirect output and run the ls process in the background.
ls > myls.txt &
This would allow you to go on about your business while it's running. It wouldn't lock up your shell.
Not sure what options there are for running ls and getting less data back. You could always run man ls to check.
This is probably not a helpful answer, but if you don't have find you may be able to make do with tar
$ tar cvf /dev/null .
I am told by people older than me that, "back in the day", single-user and recovery environments were a lot more limited than they are nowadays. That's where this trick comes from.
I'm assuming you are using GNU ls?
try
\ls
It will bypass the usual alias for ls (ls --color=auto).
If a process "doesn't come back", I recommend strace to analyze how a process is interacting with the operating system.
In case of ls:
$ strace ls
you would have seen that it reads all directory entries (getdents(2)) before it actually outputs anything (the sorting that was already mentioned here).
Things to try:
Check ls isn't aliased?
alias ls
Perhaps try find instead?
find . \( -type d -name . -prune \) -o \( -type f -print \)
Hope this helps.
Some followup:
You don't mention what OS you're running on, which would help indicate which version of ls you're using. This probably isn't a 'bash' question as much as an ls question. My guess is that you're using GNU ls, which has some features that are useful in some contexts, but kill you on big directories.
GNU ls tries to arrange the output into neat columns, doing a smart layout of all the filenames. In a huge directory, this will take some time and memory.
To 'fix' this, you can try:
ls -1 # no columns at all
Find a BSD ls somewhere (e.g. http://www.freebsd.org/cgi/cvsweb.cgi/src/bin/ls/) and use that on your big directories.
Use other tools, such as find
There are several ways to get a list of files:
Use this command to get a list without sorting:
ls -U
or send the list of files to a file by using:
ls /Folder/path > ~/Desktop/List.txt
What partition type are you using?
Having millions of small files in one directory it might be a good idea to use JFS or ReiserFS which have better performance with many small sized files.
How about find ./ -type f (which will find all files in the current directory)? Take off the -type f to find everything.
You should provide information about what operating system and the type of filesystem you are using. On certain flavours of UNIX and certain filesystems you might be able to use the commands ff and ncheck as alternatives.
I had a directory with timestamps in the file names. I wanted to check the date of the latest file and found find . -type f -maxdepth 1 | sort | tail -n 1 to be about twice as fast as ls -alh.
Lots of other good solutions here, but in the interest of completeness:
echo *
You can also make use of xargs. Just pipe the output of ls through xargs.
ls | xargs
If that doesn't work and the find examples above aren't working either, try piping them to xargs, as that can help with the memory usage that might be causing your problems.