The problem I'm facing is that a wildcard doesn't get expanded when it is passed through xargs to a command.
Assume we have a file file1 with the following content:
-l
-a
f*
We need to pass these arguments to ls via xargs:
cat file1 | xargs -n3 ls
The output is equivalent to that of the ls -la command, with this additional message in the terminal:
ls: cannot access f*: No such file or directory.
But matching files are in the directory (ls -la f* returns suitable output).
If we substitute file1 for f*, for example, we get correct output too.
Can you explain why this happens? Thanks.
EDIT1:
It seems worth adding how we can pass the arguments from file1 through the shell interpreter to the ls command. Here is an example:
ls `xargs -n3 < file1`
Now the shell expansion happens before ls is invoked, resulting in the same output as ls -la f*.
The f* expression is also known as a shell glob, and is supposed to be interpreted by the shell. You can try it out independently e.g. by running echo f*.
When you run ls -la f* from the shell, the shell interprets it according to your directory contents, and calls ls with the expanded version, like: ls -la file1 file2 file3. You can get some commands confused about this if it matches no file and the command expects one.
But when you pass that argument to ls through xargs, the shell doesn't get a chance to expand it, and ls is invoked literally as ls -la f*. It recognizes -la as options and f* as a filename to list. That file happens not to exist, so it fails with the error message you see.
You could get it to fail on a more usual filename:
$ ls non_existing_file
ls: cannot access non_existing_file: No such file or directory.
Or you could get it to succeed by actually having a file of that name:
$ touch 'f*'
$ xargs ls -la <<< 'f*'
-rw-rw-r-- 1 jb jb 0 2013-11-13 23:08 f*
$ rm 'f*'
Notice how I had to use single quotes so that the shell would not interpret f* as a glob when creating and deleting it.
Unfortunately, you can't get the glob expanded when it's passed directly from xargs to ls, since there's no shell involved at that point.
The contents of file are passed to xargs via standard input, so the shell never sees them to process the glob. xargs then passes them to ls, again without the shell ever seeing them, so f* is treated as a literal 2-character file name.
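If you do want the glob expanded, one workaround is to put a shell back into the pipeline and let it do the expansion. A sketch, assuming the same file1 as in the question (the trailing sh only fills the $0 slot):

```shell
# Recreate file1 from the question:
printf '%s\n' -l -a 'f*' > file1
# Insert a shell between xargs and ls; leaving $@ unquoted
# lets that shell expand the glob before ls runs:
xargs -n3 sh -c 'ls $@' sh < file1
```

Here sh receives -l, -a, and f* as positional parameters; because $@ is unquoted, the shell performs pathname expansion on f* against the current directory before running ls.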
Related
I have the following code, which removes old files in a directory based on their timestamps:
ls -tp | grep -v '/$' | tail -n +2 | xargs -I {} rm -- {}
I am trying to make an executable script out of this and I don't want to cd into the directory where the above command should be run, but rather simply pass the path e.g. /tmp/backups/ to it.
How would I do this? By appending the path after each of the commands ls, grep, tail, xargs and rm?
Assuming that your path is the first parameter of your script, you could do a
cd "${1:-/tmp/backups}" # Use the parameter, supply default if none
# Do your cleanup here
If you have afterwards more stuff to do in the original working directory, just do a
cd - >/dev/null
# Do what ever you need to do in the original working directory
The redirection of stdout into the bit bucket is there only because cd - by default prints the directory it changes into.
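Alternatively, you can run the cleanup in a subshell, so the script's original working directory is never changed and no cd - is needed afterwards. A sketch, reusing the cleanup pipeline from the question:

```shell
#!/bin/bash
# Run the cleanup in a subshell: the parent script's working
# directory stays untouched, so no "cd -" is needed afterwards.
(
  cd "${1:-/tmp/backups}" || exit 1
  # delete everything except the newest non-directory entry
  ls -tp | grep -v '/$' | tail -n +2 | xargs -r -I {} rm -- {}
)
# still in the original working directory here
```

The -r flag (a GNU xargs extension) skips running rm when there is nothing to delete; drop it on non-GNU systems.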
ls -la and ls -l both provide more information than the ls command. However,
as their outputs are very similar,
I'm not clear what the difference between the two commands is.
What is the difference between ls -la and ls -l in git bash?
The two are not the same: with ls -l, hidden files/folders will not be listed, while with ls -la they will be shown. Here is the meaning of the flags:
-l    use a long listing format
-a, --all    do not ignore entries starting with .
which means show hidden files/directories as well.
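A quick way to see the difference for yourself, in a scratch directory:

```shell
# Create one visible and one hidden file:
mkdir lsdemo && cd lsdemo
touch visible .hidden
ls -l    # long format, shows visible only
ls -la   # long format, also shows .hidden plus the . and .. entries
```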
I'm just starting with shell scripting. I need to compute checksums for a lot of files, so I thought I'd automate the process with a shell script.
I made two scripts. The first script runs a recursive ls with egrep -v, taking the path I supply as a parameter; the command's output is saved in a variable as a string, and a for loop then splits that string into lines and passes each line as a parameter to the second script. The second script passes that parameter on to the hashdeep command, whose output is again saved in a variable as a string and split using IFS; finally, I take the field of interest and write it to a text file.
The output is:
/home/douglas/Trampo/shell_scripts/2016-10-27-001757.jpg: No such file
or directory
----Checksum FILE: 2016-10-27-001757.jpg
----Checksum HASH:
The issue is: I passed the directory ~/Pictures as the parameter, but the error reports a different directory, /home/douglas/Trampo/shell_scripts/ (the script's own directory). The file 2016-10-27-001757.jpg is actually in the ~/Pictures directory. Why is the script looking in its own directory?
First script:
#!/bin/bash
arquivos=$(ls -R $1 | egrep -v '^d')
for linha in $arquivos
do
bash ./task2.sh $linha
done
second script:
#!/bin/bash
checksum=$(hashdeep $1)
concatenado=''
for i in $checksum
do
concatenado+=$i
done
IFS=',' read -ra ADDR <<< "$concatenado"
echo
echo '----Checksum FILE:' $1
echo '----Checksum HASH:' ${ADDR[4]}
echo
echo ${ADDR[4]} >> ~/Trampo/shell_scripts/txt2.txt
I think that's it... sorry about the English grammar errors.
I hope the question is clear.
Thanks in advance!
There are several things wrong in the first script alone.
When running ls in recursive mode with -R, the output is listed per directory, and each file is shown relative to its parent instead of by full pathname.
ls -R doesn't list the entries in long format, as is implied by | grep -v ^d, where it seems you are looking for files (non-directories).
In your specific case, the missing file 2016-10-27-001757.jpg is in a subdirectory, but you lost its location by using ls -R.
Do not parse the output of ls. Use find and you won't have the same issue.
First script can be replaced by a single line.
Try this:
#!/bin/bash
find "$1" -type f -exec ./task2.sh "{}" \;
Or if you prefer using xargs, try this:
#!/bin/bash
find "$1" -type f -print0 | xargs -0 -n1 -I{} ./task2.sh "{}"
Note: enclosing {} in quotes ensures that task2.sh receives a complete filename even if it contains spaces.
In task2.sh the parameter $1 should also be quoted "$1".
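Why the quotes matter: without them, a filename containing whitespace is split into several arguments. A minimal illustration (the filename my file.txt is made up):

```shell
# Simulate task2.sh being called with one argument containing a space:
set -- 'my file.txt'
printf '<%s>\n' $1     # unquoted: splits into <my> and <file.txt>
printf '<%s>\n' "$1"   # quoted: stays one argument, <my file.txt>
```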
If task2.sh is executable, you are all set. If not, add bash in the line so it reads as:
find "$1" -type f -exec bash ./task2.sh "{}" \;
task2.sh, whose permissions are not shown in the original question, is apparently not executable: it is missing the execute permission.
Add execute permission to it by running chmod like:
chmod a+x task2.sh
Good luck.
Let's say I have the following command:
ls | grep dir
If there are folders whose names contain dir, then I'll see them.
If there aren't, then I won't see any output at all.
Now, what I want is to see the output of the ls command, and then also the final output after the grep.
Say, something like this:
>>ls | grep dir
filea fileb filec
filed dir1 dir2
dir1
dir2
Where the first 2 rows are the result of ls and the last 2 rows are the result of the grep command.
How do I do that?
ls |tee /dev/tty |grep dir
will do that, although it won't put a space between the two parts.
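A close variant, if /dev/tty isn't what you want (for example inside a script whose output is redirected), is to send the intermediate copy to stderr instead:

```shell
# Full listing goes to stderr, filtered matches to stdout:
ls | tee /dev/stderr | grep dir
```

You still see both parts on screen, but only the grep matches reach stdout, so the filtered result can be captured or piped further.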
I'm cheating a bit here, but can't you just run both commands one after another?
Like this (adding an echo to put some space between them):
ls && echo && ls | grep dir
The simplest way is to run ls twice:
ls; ls -1 | grep dir
Note the -1 option: grep is line-oriented, which is why ls should print one directory entry per line.
I have a folder called .dir. When I type ls -al | grep .dir it finds it and shows me the result, but when I type ls -al | grep *dir (or *i*, *d*) it shows nothing. I cannot understand why it doesn't find any matches.
When you do grep *dir two things can happen:
If there are any non-hidden files ending in dir (say, foodir and bardir), the command will be expanded to grep foodir bardir and will not do what you expect. Moreover, the behavior will differ depending on whether there was just one such file or more than one...
If there are no non-hidden files ending in dir, then *dir is not expanded but is used as the regular expression in the grep call. But the asterisk in grep repeats the preceding match any number of times, and since there is no preceding character here it stands for itself. Conclusion: grep is looking for "*dir", literally.
For the first problem just use quotes. For the second use .*dir (. stands for any character):
ls -la | grep ".*dir"
Or if you want to see just your directory:
ls -la | grep "\.dir"
Note that if you do not escape the . with a \, it will also show files such as adir, _dir...
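You can check both patterns against a fixed list of made-up names:

```shell
# Feed grep one name per line and compare the two quoted patterns:
printf '%s\n' .dir adir _dir mydir | grep ".*dir"   # all four match
printf '%s\n' .dir adir _dir mydir | grep "\.dir"   # only .dir matches
```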
Unquoted asterisks are expanded before being passed to grep. That means if your directory is empty except for the .dir directory, the string passed to grep depends on the status of the dotglob and nullglob settings (in Bash; I don't know their Korn shell equivalents):
$ cd -- "$(mktemp -d)"
$ mkdir .dir
$ shopt -u dotglob nullglob
$ echo *dir
*dir
$ shopt -s nullglob
$ echo *dir
$ shopt -s dotglob
$ echo *dir
.dir
grep works with regular expressions, not globs. * is a valid glob, but not a valid regular expression without anything before it.