I often use this list command in Unix (AIX / KSH):
ls -Artl
It displays the files like this:
-rw-r--r-- 1 myuser mygroup 0 Apr 2 11:59 test1.txt
-rw-r--r-- 1 myuser mygroup 0 Apr 2 11:59 test2.txt
I would like to modify the command in such a way that the full path of each file is displayed. For example:
-rw-r--r-- 1 myuser mygroup 0 Apr 2 11:59 /usr/test1.txt
-rw-r--r-- 1 myuser mygroup 0 Apr 2 11:59 /usr/test2.txt
Any ideas?
I found several resolution methods using pwd or find, but as far as I can see these do not work if I want to keep the ls options.
What about this trick...
ls -lrt -d -1 $PWD/{*,.*}
OR
ls -lrt -d -1 $PWD/*
I think this has problems with empty directories, but if another poster has a tweak I'll update my answer. Also, you may already know this, but this is probably a good candidate for an alias given its length (see the sketch below).
[update] added some tweaks based on comments, thanks guys.
[update] as pointed out in the comments, you may need to tweak the glob expressions depending on the shell (bash vs. zsh). I've re-added my older command for reference.
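Since the question mentions ksh, here is a minimal alias sketch; the name lsfull is purely illustrative, and -d keeps ls from descending into any directories the glob matches:

# Illustrative alias; put it in ~/.kshrc or ~/.profile.
alias lsfull='ls -Artld "$PWD"/*'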
Try this, works for me: ls -d /a/b/c/*
Use this command:
ls -ltr /mig/mthome/09/log/*
instead of:
ls -ltr /mig/mthome/09/log
to get the full path in the listing.
I use this command:
ls -1 | xargs readlink -f
Optimized from spacedrop's answer ...
ls $(pwd)/*
and you can use ls options
ls -alrt $(pwd)/*
Simply use the find tool:
find absolute_path
displays full paths on my Linux machine, while
find relative_path
will not.
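A quick illustration, assuming the two test files from the question live in /usr (output is illustrative): find echoes each path prefixed by whatever starting point you gave it.

$ cd /usr
$ find "$PWD" -name 'test*.txt'    # absolute starting point -> absolute paths
/usr/test1.txt
/usr/test2.txt
$ find . -name 'test*.txt'         # relative starting point -> relative paths
./test1.txt
./test2.txt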
I wrote a shell script called fullpath that contains this code and use it every day:
#!/bin/sh
# Print the absolute path of each argument.
# Quote "$@" and "$i" so names with spaces survive.
for i in "$@"; do
    echo "$(pwd)/$i"
done
Put it somewhere in your PATH and make it executable (chmod 755 fullpath), then just use
fullpath file_or_directory
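For example, with the files from the question (illustrative session):

$ cd /usr
$ fullpath test1.txt test2.txt
/usr/test1.txt
/usr/test2.txt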
You can combine the find command and the ls command. Use the path (.) and the -name selector (*) to narrow down the files you're after. Surround the find command in backquotes. The argument to -name is double-quote star double-quote, in case it is hard to read.
ls -lart `find . -type f -name "*" `
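Note that the backquoted version breaks on file names containing spaces; a safer sketch is to let find invoke ls itself via -exec (the "*" filter is kept from above and matches everything):

find . -type f -name "*" -exec ls -lart {} +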
Related
I am trying to get a directory to replace an existing folder, but can't get it done with mv - I believe there's a way and I just don't know it (yet), even after consulting the man page and searching the web.
If /path/to/ contains only directory, the following command moves /path/to/directory (which vanishes) to /path/to/folder:
mv /path/to/directory /path/to/folder
It is basically a rename, which is what I am trying to achieve.
But if /path/to/folder already exists, the same command moves the /path/to/directory to /path/to/folder/directory.
I do not want to use the cp command, to avoid the extra I/O.
Instead of using cp to actually copy the data in each file, use ln to make "copies" of the pointers to the file.
ln /path/to/directory/* /path/to/folder && rm -rf /path/to/directory
Note this is slightly more atomic than using cp; each individual file appears in /path/to/folder in a single step (i.e., there is no chance that /path/to/folder/foo.txt is ever partially copied), but there is still a small window where some, but not all, files from /path/to/directory have been linked into folder. Also, the rm -rf is not atomic, but assuming no one is interested in directory, that's not an issue. (Although, as files from /path/to/directory are unlinked, you can see the link counts of files under /path/to/folder change from 2 to 1. It's unlikely that anyone will care about that.)
What you think of as a file is really just a file system entry to an otherwise anonymous file managed by the file system. Consider a simple example.
$ mkdir d
$ cd d
$ echo hello > file.txt
$ cp file.txt file_copy.txt
$ ln file.txt file_link.txt
$ ls -li
total 24
12890456377 -rw-r--r-- 2 chepner staff 6 Mar 3 12:46 file.txt
12890456378 -rw-r--r-- 1 chepner staff 6 Mar 3 12:47 file_copy.txt
12890456377 -rw-r--r-- 2 chepner staff 6 Mar 3 12:46 file_link.txt
The -i option adds each entry's inode number (the first column) to the output; an inode can be thought of as a unique identifier for a file. In this output, you can see that file_copy.txt is an entirely new file, with a different inode than file.txt. file_link.txt has the exact same inode, meaning that file.txt and file_link.txt are simply two different names for the same thing. The number just before the owner is the link count; file.txt and file_link.txt both refer to a file with a link count of 2.
When you use rm, you are just removing a link to a file, not the file itself. A file is not removed until the link count is reduced to 0. To demonstrate, we'll remove file.txt and file_copy.txt.
$ rm file.txt file_copy.txt
$ ls -li
total 8
12890456377 -rw-r--r-- 1 chepner staff 6 Mar 3 12:46 file_link.txt
As you can see, the only link to file_copy.txt is gone, so inode 12890456378 no longer appears in the output. (Whether or not the data is really gone is a matter of file-system implementation.) file_link.txt, though, still refers to the same file as before, but now with a link count of 1, because file.txt was removed.
Links to a file do not have to appear in the same directory; they can appear in any directory on the same file system, which is the only caveat to using this trick. ln will, IIRC, give you an error if you try to create a link to a file on another file system.
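If you are unsure whether two paths share a filesystem before relying on this trick, a quick check (paths are the illustrative ones from above):

# df prints the filesystem each path lives on; the ln trick
# requires both lines to show the same filesystem.
df -P /path/to/directory /path/to/folder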
I have a directory that contains sub-directories and other files and would like to update the date/timestamps recursively with the date/timestamp of another file/directory.
I'm aware that:
touch -r file directory
changes the date/timestamp of the file or directory to that of the other, but nothing within it. There's also the find version:
find . -exec touch -mt 201309300223.25 {} +
which would work fine if I could specify the actual file/directory and use another's date/timestamp. Is there a simple way to do this? Even better, is there a way to avoid changing/updating timestamps when doing a cp?
Even better, is there a way to avoid changing/updating timestamps when doing a cp?
Yes, use cp with the -p option:
-p
       same as --preserve=mode,ownership,timestamps
--preserve
       preserve the specified attributes (default:
       mode,ownership,timestamps), if possible additional
       attributes: context, links, xattr, all
Example
$ ls -ltr
-rwxrwxr-x 1 me me 368 Apr 24 10:50 old_file
$ cp old_file not_maintains <----- does not preserve time
$ cp -p old_file do_maintains <----- does preserve time
$ ls -ltr
total 28
-rwxrwxr-x 1 me me 368 Apr 24 10:50 old_file
-rwxrwxr-x 1 me me 368 Apr 24 10:50 do_maintains <----- does preserve time
-rwxrwxr-x 1 me me 368 Sep 30 11:33 not_maintains <----- does not preserve time
To recursively touch files in a directory based on the corresponding file under another path, you can try something like the following:
find /your/path/ -exec touch -r $(echo {} | sed "s#/your/path#/your/original/path#g") {} \;
As written it does not work for me: the $(...) command substitution is expanded by the shell once, before find ever runs, so {} is never replaced by each file's path.
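A sketch of a variant that defers the substitution until find has a real path, assuming the same illustrative /your/path layout; the sh -c wrapper makes the sed run once per file:

find /your/path/ -exec sh -c '
    ref=$(printf "%s" "$1" | sed "s#^/your/path#/your/original/path#")
    touch -r "$ref" "$1"
' sh {} \;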
In addition to 'cp -p', you can (re)create an old timestamp using 'touch -t'. See the man page of 'touch' for more details.
touch -t 200510071138 old_file.dat
I use a find command to find some kinds of files in bash. Everything goes fine, except that the result shown to me just contains the file name but not the (last modification) date of the file. I tried to pipe it into ls or ls -ltr, but it just does not show the file date column in the result. I also tried this:
ls -ltr | find . -ctime 1
but actually it didn't work.
Can you please guide me on how I can view the file date of the files returned by a find command?
You need either xargs or -exec for this:
find . -ctime 1 -exec ls -l {} \;
find . -ctime 1 | xargs ls -l
(The first executes ls on every found file individually; the second bunches them up into one or more big ls invocations, so that they may be formatted slightly better.)
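If file names may contain spaces or other special characters, a hedged variant assuming GNU find and xargs uses NUL-delimited names:

# -print0 / -0 keep names with spaces or newlines intact.
find . -ctime 1 -print0 | xargs -0 ls -l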
If all you want is to display an ls like output you can use the -ls option of find:
$ find . -name resolv.conf -ls
1048592 8 -rw-r--r-- 1 root root 126 Dec 9 10:12 ./resolv.conf
If you want only the timestamp you'll need to look at the -printf option
$ find . -name resolv.conf -printf "%a\n"
Mon May 21 09:15:24 2012
find . -ctime 1 -printf '%t\t%p\n'
prints the modification time and file path, separated by a tab character.
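If you also want the results in time order, a sketch assuming GNU find: %T@ prints the modification time in seconds since the epoch, which sorts cleanly.

find . -ctime 1 -printf '%T@ %p\n' | sort -n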
If the glob */ only matches directories, then logically the extglob !(*/) should match non-directories; but this doesn't work. Is this a bug or am I missing something? Does this work on any shell?
Test 1 to prove that */ works
$ cd /tmp; ls -ld */
drwxr-xr-x 2 seand users 4096 Jan 1 15:59 test1//
drwxr-xr-x 2 seand users 4096 Jan 1 15:59 test2//
drwxr-xr-x 2 seand users 4096 Jan 1 15:59 test3//
Test 2 to show potential bug with !(*/)
$ cd /tmp; shopt -s extglob; ls -ld !(*/)
/bin/ls: cannot access !(*/): No such file or directory
In Bash, !() (like *, ?, *(), and @()) only applies to one path component. Thus, !(anything containing a / slash) doesn't work.
If you switch to zsh, you can use *(^/) to match all non-directories, or *(.) to match all plain files.
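If you need to stay in bash, a minimal workaround sketch is to loop and filter with a test (note this also skips symlinks that point to directories):

for f in *; do
    [ -d "$f" ] || ls -ld "$f"
done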
The answer to the specific question has already been given, and I am not sure if you really wanted another solution or were just interested in analyzing the behavior, but one way to list all non-directories in the current folder is to use find (with -maxdepth before the other tests, as GNU find expects):
find . -maxdepth 1 ! -type d
I wonder how to list the content of a tar file only down to some level?
I understand tar tvf mytar.tar will list all files, but sometimes I would like to only see directories down to some level.
Similarly, for the command ls, how do I control the level of subdirectories that will be displayed? By default, it will only show the direct subdirectories, but not go further.
depth=1
tar --exclude="*/*" -tf file.tar
depth=2
tar --exclude="*/*/*" -tf file.tar
tar tvf scripts.tar | awk -F/ '{if (NF<4) print }'
drwx------ glens/glens 0 2010-03-17 10:44 scripts/
-rwxr--r-- glens/www-data 1051 2009-07-27 10:42 scripts/my2cnf.pl
-rwxr--r-- glens/www-data 359 2009-08-14 00:01 scripts/pastebin.sh
-rwxr--r-- glens/www-data 566 2009-07-27 10:42 scripts/critic.pl
-rwxr-xr-x glens/glens 981 2009-12-16 09:39 scripts/wiki_sys.pl
-rwxr-xr-x glens/glens 3072 2009-07-28 10:25 scripts/blacklist_update.pl
-rwxr--r-- glens/www-data 18418 2009-07-27 10:42 scripts/sysinfo.pl
Note that the number you compare NF against is 3 plus however many levels you want, because the / in the username/group field adds an extra field. If you just do
tar tf scripts.tar | awk -F/ '{if (NF<3) print }'
scripts/
scripts/my2cnf.pl
scripts/pastebin.sh
scripts/critic.pl
scripts/wiki_sys.pl
scripts/blacklist_update.pl
scripts/sysinfo.pl
the number is only 2 plus the levels you want, since there is no username/group field.
You could probably pipe the output of ls -R to this awk script, and have the same effect.
Another option is archivemount. You mount the archive and cd into it; then you can do anything with it just as with any other filesystem.
$ archivemount /path/to/files.tgz /path/to/mnt/folder
It seems faster than the tar method.
It would be nice if we could tell the find command to look inside a tar file, but I doubt that is possible.
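Mounting is, in effect, a way to get exactly that. A hedged end-to-end sketch (archivemount is FUSE-based, so on Linux fusermount -u detaches it; paths are the illustrative ones from above):

mkdir -p /path/to/mnt/folder
archivemount /path/to/files.tgz /path/to/mnt/folder
find /path/to/mnt/folder -maxdepth 2    # browse it like a normal tree
fusermount -u /path/to/mnt/folder       # detach when done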
A quick and ugly (and not foolproof) way would be to limit the number of directory separators, for example:
$ tar tvf myfile.tar | grep -E '^[^/]*(/[^/]*){1,2}$'
The 2 says to display at most 2 slashes (in my case one is already taken up by the user/group separator), and hence to display files at depth at most one. You might want to try different numbers in place of the 2.
I agree with leonbloy's answer - there's no way to do this straightforwardly within the tarball itself.
Regarding the second part of your question, ls does not have a max depth option. You can recurse everything with ls -R, but that's often not very useful.
However, you can do this with both find and tree. For example, to list files and directories one level deep, you can do
find -maxdepth 2
or
tree -L 2
tree also has a -d option, which recursively lists directories, but not files, which I find much more useful than -L, in general.
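The two options combine; for example, to show only directories down to two levels:

tree -d -L 2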
I was able to show only the directory names at a particular depth using grep:
for depth 3:
tar -tf mytar.tar | grep -Ex '([^/]+/){3}'
or for depth $DEPTH (note the double quotes, so the shell expands the variable, and the / inside the group, matching the pattern above):
tar -tf mytar.tar | grep -Ex "([^/]+/){$DEPTH}"
You can speed that up by combining grep with the --exclude approach from sacapeao's accepted answer.
for depth 3:
tar --exclude '*/*/*/*/*' -tf mytar.tar | grep -Ex '([^/]+/){3}'