Show only the file name, without the entire directory path - shell

ls /home/user/new/*.txt prints all the .txt files in that directory. However, it prints the output as follows:
[me#comp]$ ls /home/user/new/*.txt
/home/user/new/file1.txt /home/user/new/file2.txt /home/user/new/file3.txt
and so on.
I am not running the ls command from the /home/user/new/ directory, so I have to give the full directory path, yet I want the output to be only:
[me#comp]$ ls /home/user/new/*.txt
file1.txt file2.txt file3.txt
I don't want the entire path; only the filename is needed. This issue has to be solved using the ls command, as its output is meant for another program.

ls whateveryouwant | xargs -n 1 basename
Does that work for you?
Otherwise you can (cd /the/directory && ls) (yes, parentheses intended)
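Since the output is meant for another program, here is a minimal sketch of feeding it from the subshell variant (your_program is just a placeholder for whatever consumes the names):
(cd /home/user/new && ls *.txt) | your_program   # your_program = hypothetical consumer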

No need for xargs and all that; ls is more than enough.
ls -1 *.txt
displays one file per line.

There are several ways you can achieve this. One would be something like:
for filepath in /path/to/dir/*
do
  filename=$(basename "$filepath")
  # ... whatever you want to do with the file here
done
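If spawning basename for every file is a concern, here is a pure-shell sketch of the same loop using parameter expansion:
for filepath in /path/to/dir/*
do
  filename=${filepath##*/}   # strips everything up to and including the last /
  # ... whatever you want to do with the file here
done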

Use the basename command:
basename /home/user/new/*.txt
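Note that plain basename takes a single path; with GNU coreutils you can pass it several at once via -a (a sketch assuming GNU basename):
basename -a /home/user/new/*.txt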

(cd dir && ls)
will only output filenames in dir. Use ls -1 if you want one per line.
(Changed ; to && as per Sactiw's comment).

You could add a sed script to your command line:
ls /home/user/new/*.txt | sed -r 's/^.+\///'

A fancy way to solve it is by using "rev" twice together with "cut":
find ./ -name "*.txt" | rev | cut -d '/' -f1 | rev

The selected answer did not work for me, as I had spaces, quotes and other strange characters in my filenames. To quote the input for basename, you should use:
ls /path/to/my/directory | xargs -n1 -I{} basename "{}"
This is guaranteed to work, regardless of what the files are called.
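If the names could even contain newlines, piping ls becomes unreliable; here is a sketch using GNU find, which can print just the basename by itself:
find /path/to/my/directory -maxdepth 1 -type f -printf '%f\n'
(Use -printf '%f\0' and a NUL-aware consumer if newlines in names are a real possibility.)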

I prefer the basename approach, which fge has already answered.
Another way is:
ls /home/user/new/*.txt|awk -F"/" '{print $NF}'
One more (ugly) way is:
ls /home/user/new/*.txt| perl -pe 's/\//\n/g'|tail -1

Just hoping to be helpful to someone, as old problems seem to come back every now and again and I always find good tips here.
My problem was to list, in a text file, the names of all the "*.txt" files in a certain directory, without path and without extension, from a DataStage 7.5 sequence.
The solution we used is:
ls /home/user/new/*.txt | xargs -n 1 basename | cut -d '.' -f1 > name_list.txt
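If GNU basename is available, the .txt suffix can be stripped in the same step, which also copes with file names containing extra dots (a sketch assuming GNU coreutils):
basename -a -s .txt /home/user/new/*.txt > name_list.txt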

There are lots of ways we can do that; you can simply try the following.
ls /home/user/new | grep '\.txt$'
Another method:
cd /home/user/new && ls *.txt

Here is another way:
ls -1 /home/user/new/*.txt|rev|cut -d'/' -f1|rev

You could also pipe to grep and pull everything after the last forward slash. It looks goofy, but I think a defensive grep should be fine unless (like some kind of maniac) you have forward slashes within your filenames.
ls folderpathwithcriteria | grep -P -o -e "[^/]*$"

When you want to list names in a path but they have different file extensions:
me#server:/var/backups$ ls -1 *.zip && ls -1 *.gz
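In bash, brace expansion lets you do it in one command; a sketch (nullglob keeps a pattern that matches nothing from being passed to ls literally):
shopt -s nullglob
ls -1 *.{zip,gz}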

Related

Exclude files listed in '.hidden' when using 'ls'

In my home folder, I have a file called '.hidden' in which I put the names of a few files and folders that I don't want to see in that directory. I can't change their names, i.e. I can't put a dot '.' at the beginning.
When I give 'ls' in a terminal, it lists all files and folders including those listed in '.hidden'. I want 'ls' to exclude them.
Is there a way for doing this?
I have to pass the content of the file '.hidden' to ls. Something like:
ls --ignore= [SOME MAGIC HERE] $(tr '\n' ' ' < .hidden)
Thank you! :)
Here's how I solved this:
alias ls='exclusion=; while read p ; do exclusion=$exclusion" -I$p"; done < ~/.hidden; ls $exclusion --color=auto --group-directories-first'
I've got another solution! sed, tr, and xargs to the rescue!
alias ls="sed 's/^/-I \"/' .hidden 2>/dev/null | sed 's/$/\"/' | tr '\n' ' ' | xargs ls --color=auto"

Counting number of occurrences in several files

I want to check the number of occurrences of, let's say, the character '[', recursively in all the files of a directory that have the same extension, e.g. *.c. I am working on the Solaris OS (Unix).
I tried some solutions given in other posts, and the only one that works is this one, since on this OS I cannot use grep -o:
sed 's/[^x]//g' filename | tr -d '\012' | wc -c
where x is the character whose occurrences I want to count. This one works, but it's not recursive; is there any way to make it recursive?
You can get a recursive listing from find and execute commands with its -exec argument.
I'd suggest like:
find . -name '*.c' -exec cat {} \; | tr -c -d ']' | wc -c
The -c argument to tr means to use the opposite of the string supplied -- i.e. in this case, match everything but ].
The . in the find command means to search in the current directory, but you can supply any other directory name there as well.
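If a per-file breakdown is wanted rather than a single grand total, here is a sketch along the same lines (assumes a POSIX sh is available for -exec):
find . -name '*.c' -exec sh -c 'printf "%s %s\n" "$1" "$(tr -c -d "]" < "$1" | wc -c)"' sh {} \;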
I hope you have nawk installed. Then you can just:
nawk '{a+=gsub(/\]/,"x")}END{print a}' /path/*
You can also write a small snippet yourself. I suggest running the following:
awk '{n += gsub(/\[/, "")} END{print n+0}' *.c
This counts every occurrence of "[" in all the .c files in the present directory and prints the total.

Shell: printing just the right part after the . (dot)

I need to find just the extensions of all the files in a directory (if two files have the same extension, it should appear just once). I already have something, but the output of my script looks like this:
test.txt
test2.txt
hello.iso
bay.fds
hellllu.pdf
I'm using grep -e -e '.' and it just highlights the dots.
I need just these extensions, collected in one variable, like txt,iso,fds,pdf.
Is there anyone who could help? I had it working once, but that version used an array. Today I found out it has to work in dash too.
You can use find with awk to get all unique extensions:
find . -type f -name '?*.?*' -print0 |
awk -F. -v RS='\0' '!seen[$NF]++{print $NF}'
Can be done with find as well, but I think this is easier:
for f in *.*; do echo "${f##*.}"; done | sort -u
If you want to assign a comma-separated list of the unique extensions, you can do this:
ext=$(for f in *.*; do echo "${f##*.}"; done | sort -u | paste -sd,)
echo $ext
csv,pdf,txt
Alternatively, with ls:
ls -1 *.* | rev | cut -d. -f1 | rev | sort -u | paste -sd,
rev/rev is required if you have more than one dot in the filename, assuming the extension is after the last dot. For any other directory simply change the part *.* to dirpath/*.* in all scripts.
I'm not sure I understand your comment. If you don't assign to a variable, by default it will print to the output. If you want to pass directory name as a variable to a script, put the code into a script file and replace dirpath with $1, assuming that will be your first argument to the script
#!/bin/bash
# print unique extension in the directory passed as an argument, i.e.
ls -1 "$1"/*.* ...
If you have subdirectories whose names have extensions, the above scripts include them as well; to limit the output to regular files only, replace ls ... with:
find . -maxdepth 1 -type f -name "*.*" | ...
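Putting the pieces together, here is a sketch of a complete script (the directory argument defaults to the current one; assumes filenames without embedded newlines):
#!/bin/bash
# print the unique extensions of the regular files in the directory given as $1
dir=${1:-.}
find "$dir" -maxdepth 1 -type f -name '*.*' |
  rev | cut -d. -f1 | rev | sort -u | paste -sd,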

Manipulating a file - bash

I need some guidance manipulating a text file that is the result of a diff. I only want those results listed after the > delimiter (which are file names) and then I will add a path to the file name for further work.
I am not dealing with large files.
I am hoping to do it all in place.
Essentially I want to take something like this
96a97,98
> SCR-33333.sql
> SCR-33333-WEB.sql
and create an action like
cp /add/this/path/SCR-33333.sql /to/somewhere/else
Can anyone please give me a quick example I can run with?
Well, you could try this, bearing in mind that it'll only work if filenames do not contain spaces...
diff this that | awk '/^>/{print "/add/this/path/" $2}' | xargs -i cp {} /to/somewhere/else
grep ">" dummy.txt | cut -f 2 -d ' ' | xargs -I{} cp /add/this/path/{} somewhere
where 'dummy.txt' is your diff file.
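If the filenames may contain spaces, here is a sketch that reads the diff output line by line instead (this and that are the same two inputs as in the first answer):
diff this that | sed -n 's/^> //p' | while IFS= read -r f; do
  cp "/add/this/path/$f" /to/somewhere/else/
done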

Listing files in date order with spaces in filenames

I am starting with a file containing a list of hundreds of files (full paths) in a random order. I would like to list the details of the ten latest files in that list. This is my naive attempt:
$ ls -las -t `cat list-of-files.txt` | head -10
That works, so long as none of the files have spaces in, but fails if they do as those files are split up at the spaces and treated as separate files. File "hello world" gives me:
ls: hello: No such file or directory
ls: world: No such file or directory
I have tried quoting the files in the original list-of-files file, but the command substitution still splits the files up at the spaces in the filenames, treating the quotes as part of the filenames:
$ ls -las -t `awk '{print "\"" $0 "\""}' list-of-files.txt` | head -10
ls: "hello: No such file or directory
ls: world": No such file or directory
The only way I can think of doing this, is to ls each file individually (using xargs perhaps) and create an intermediate file with the file listings and the date in a sortable order as the first field in each line, then sort that intermediate file. However, that feels a bit cumbersome and inefficient (hundreds of ls commands rather than one or two). But that may be the only way to do it?
Is there any way to pass "ls" a list of files to process, where those files could contain spaces - it seems like it should be simple, but I'm stumped.
Instead of "one or more blank characters", you can force bash to use another field separator:
OIFS=$IFS
IFS=$'\n'
ls -las -t $(cat list-of-files.txt) | head -10
IFS=$OIFS
However, I don't think this code would be more efficient than doing a loop; in addition, that won't work if the number of files in list-of-files.txt exceeds the max number of arguments.
Try this:
xargs -a list-of-files.txt ls -last | head -n 10
I'm not sure whether this will work, but did you try escaping spaces with \? Using sed or something. sed "s/ /\\\\ /g" list-of-files.txt, for example.
This worked for me:
xargs -d\\n ls -last < list-of-files.txt | head -10
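For completeness, here is a sketch of the question's "sortable timestamp first" idea, using GNU stat so the whole list is handled in a few batches rather than one ls per file (assumes GNU coreutils and no newlines in the names):
xargs -d '\n' stat --format '%Y %n' < list-of-files.txt | sort -rn | head -10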
