grep -R is returning a file called ..SomeFile. The double dot is not separated from the rest of the filename with a /.
I know all about ./ and ../ to signify "this directory" and "the directory above". What does this mean?
$ grep -R fish > grepresults
$ cat grepresults
..SomeFile
I looked in, above, and below the current directory for SomeFile, and I sure don't see it. Maybe I missed it. I don't know what to expect, and I don't know what the .. is telling me.
It is a normal file. File and directory names are allowed to contain dots, including at the beginning. Some tools, like ls, do however hide files and directories starting with a dot by default; the -a command-line flag to ls disables this behavior.
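You can see this for yourself with a quick session (hypothetical file name, in an otherwise empty directory):
$ touch ..SomeFile
$ ls
$ ls -a
.  ..  ..SomeFile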
First I create 3 files:
$ touch alpha bravo carlos
Then I want to save the list to a file:
$ ls > info.txt
However, info.txt always shows up inside its own listing:
$ cat info.txt
alpha
bravo
carlos
info.txt
It looks like the redirection operator creates my info.txt before ls runs.
In that case, my question is: how can I save my list of files without first creating info.txt?
The main question is about the redirection operator: why does it act first, and how can I delay it until my command completes? Please use the example above in your answer.
When you redirect a command's output to a file, the shell opens a file handle to the destination file, then runs the command in a child process whose standard output is connected to this file handle. There is no way to change this order, but you can redirect to a file in a different directory if you don't want the ls output to include the new file.
ls >/tmp/info.txt
mv /tmp/info.txt ./
In a production script, you should make sure that the file name is unique and unpredictable.
t=$(mktemp -t lstemp.XXXXXXXXXX) || exit  # create a unique temporary file; bail out if that fails
trap 'rm -f "$t"' INT HUP                 # remove the temporary file if the script is interrupted
ls >"$t"
mv "$t" ./info.txt
Alternatively, capture the output into a variable, and then write that variable to a file.
files=$(ls)
echo "$files" >info.txt
As an aside, probably don't use ls in scripts. If you want a list of files in the current directory
printf '%s\n' *
does that.
One simple approach is to save your command output to a variable, like this:
ls_output="$(ls)"
and then write the value of that variable to the file, using any of these commands:
printf '%s\n' "$ls_output" > info.txt
cat <<< "$ls_output" > info.txt
echo "$ls_output" > info.txt
Some caveats with this approach:
Bash variables can't contain null bytes. If the output of the command includes a null byte, that byte and everything after it will be discarded.
In the specific case of ls, though, this shouldn't be an issue, because the output of ls should never contain a null byte.
$(...) removes trailing newlines. The above compensates for this by adding a newline while creating info.txt, but if the command output ends with multiple newlines, then the above will effectively collapse them into a single newline.
In the specific case of ls, this could happen if a filename ends with a newline — very unusual, and unlikely to be intentional, but nonetheless possible.
Since the above adds a newline while creating info.txt, it will put a newline there even if the command output doesn't end with a newline.
In the specific case of ls, this shouldn't be an issue, because the output of ls should always end with a newline.
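You can see the trailing-newline stripping for yourself (a small demo; the variable name is arbitrary):
$ x=$(printf 'line\n\n\n')
$ printf '%s' "$x" | od -c
0000000   l   i   n   e
0000004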
If you want to avoid the above issues, another approach is to save your command output to a temporary file in a different directory, and then move it to the right place; for example:
tmpfile="$(mktemp)"
ls > "$tmpfile"
mv -- "$tmpfile" info.txt
. . . which obviously has different caveats (e.g., it requires access to write to a different directory), but should work on most systems.
One way to do what you want is to exclude the info.txt file from the ls output.
If you can rename the list file to .info.txt then it's as simple as:
ls >.info.txt
ls doesn't list files whose names start with . by default.
If you can't rename the list file but you've got GNU ls then you can use:
ls --ignore=info.txt >info.txt
Failing that, you can use:
ls | grep -v '^info\.txt$' >info.txt
All of the above options have the advantage that you can safely run them after the list file has been created.
Another general approach is to capture the output of ls with one command and save it to the list file with a second command. As others have pointed out, temporary files and shell variables are two specific ways to capture the output. Another way, if you've got the moreutils package installed, is to use the sponge utility, which soaks up all of its input before opening the output file:
ls | sponge info.txt
Finally, note that you may not be able to reliably extract the list of files from info.txt if it contains plain ls output. See ParsingLs - Greg's Wiki for more information.
I have the following code below, which is an attempt to create a symbolic link for each file matching the pattern *let.txt in the working folder, where each link has the same name as the original file but with underscores instead of spaces. I need to keep the original files untouched, hence the use of symlinks.
The error I get is
ln: failed to access ‘*let.txt’: Too many levels of symbolic links
So I see the search string is getting passed literally into tempstring, and I don't know why. How do I correct my code?
for file in *let.txt; do
    tempstring="${file// /_}"
    ln -s "$file" $tempstring
done
Try putting quotes around all your variables, so that one variable is always one parameter:
for file in *let.txt; do
    tempstring="${file// /_}"
    ln -s "$file" "$tempstring"
done
Tested and works on GNU bash, version 5.0.11(1)-release (x86_64-apple-darwin18.6.0)
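As an aside, the "Too many levels of symbolic links" error most likely means the glob matched nothing on an earlier run: the loop then ran once with the literal string *let.txt, and ln -s created a symlink named *let.txt pointing at itself, which later runs could not resolve. In bash you can guard against that with nullglob (a sketch):
shopt -s nullglob   # a glob that matches nothing now expands to nothing, instead of itself
for file in *let.txt; do
    tempstring="${file// /_}"
    ln -s "$file" "$tempstring"
done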
I'm a beginner in the terminal and bash language, so please be gentle and answer thoroughly. :)
I'm using Cygwin terminal.
I'm using the file command, which returns the file type, like:
$ file myfile1
myfile1: HTML document, ASCII text
Now, I have a directory called test, and I want to check the type of all files in it.
My endeavors:
I checked in the man page for file (man file), and I could see in the examples that you could type the names of all files after the command and it gives the types of all, like:
$ file myfile{1,2,3}
myfile1: HTML document, ASCII text
myfile2: gzip compressed data
myfile3: HTML document, ASCII text
But my files' names are random, so there's no specific pattern to follow.
I tried using the for loop, which I think is going to be the answer, but this didn't work:
$ for f in ls; do file $f; done
ls: cannot open `ls' (No such file or directory)
$ for f in ./; do file $f; done
./: directory
Any ideas?
Every Unix or Linux shell supports some kind of globbing. In your case, all you need is the * glob. This magic symbol represents all (non-hidden) files and folders in the given path.
e.g., file directory/*
Shell will substitute the glob with all matching files and directories in the given path. The resulting command that will actually get executed might be something like:
file directory/foo directory/bar directory/baz
You can use a combination of the find and xargs command.
For example:
find /your/directory/ | xargs file
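One caveat: plain xargs splits its input on whitespace, so paths containing spaces will break. If your find and xargs support null-delimited output (the GNU and BSD versions both do), this variant is more robust:
find /your/directory/ -print0 | xargs -0 file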
HTH
file directory/*
is probably the shortest and simplest fix for your issue, but this is more of an answer as to why your loops weren't working.
for f in ls; do file $f; done
ls: cannot open `ls' (No such file or directory)
This loop says "for f in the literal word 'ls'; do ...", so file tries to open a file named ls. If you wanted it to execute the ls command, then you would need command substitution, something like this:
for f in `ls`; do file "$f"; done
But that wouldn't work correctly if any of the filenames contain whitespace. It is safer and more efficient to use the shell's builtin "globbing" like this
for f in *; do file "$f"; done
For this one there's an easy fix.
for f in ./; do file $f; done
./: directory
Currently, you're asking it to run the file command on the directory "./" itself.
Change it to ./*, meaning everything within the current directory (which is the same thing as just *):
for f in ./*; do file "$f"; done
Remember, double quote variables to prevent globbing and word splitting.
https://github.com/koalaman/shellcheck/wiki/SC2086
This is for the Apple platform. My end goal is to do a find and replace for a line inside of the firefox preference file "prefs.js" to turn off updates. I want to be able to do this for all accounts on the Mac, including the user template (didn't include that in the examples). So far I've been able to get a list of all the paths that have the prefs.js file with this:
find /Users -name prefs.js
I then put the old preference and new preference in variables:
oldPref='user_pref("app.update.enabled", false);'
newPref='user_pref("app.update.enabled", true);'
I then have a "for loop" with the sed command to replace the old preference with the new preference:
for prefs in `find /Users -name prefs.js`
do
    sed "s/$oldPref/$newPref/g" "$prefs"
done
The problem I'm running into is that the "find" command returns the full paths with the stupid "Application Support" in the path name like this:
/Users/admin/Library/Application Support/Firefox/Profiles/437cwg3d.default/prefs.js
When the command runs, I get these errors:
sed: /Users/admin/Library/Application: No such file or directory
sed: Support/Firefox/Profiles/437cwg3d.default/prefs.js: No such file or directory
I'm assuming that I somehow need to get the find command to wrap the output paths in quotes for the sed command to parse them correctly? Am I on the right track? I've tried to pipe the find command into sed to wrap quotes, but I can't get anything to work correctly. Please let me know if I should go about this differently. Thank you.
You don't want to for prefs in ... on a list of files that are output from find. For a more complete explanation of why this is bad, see Greg's wiki page about parsing ls. You would only use a for loop in bash if you could match the files using a glob, which is difficult if you want to do it recursively.
It would be better, if you can swing it, to use find ... -exec ... instead. Perhaps something like:
find /Users -name prefs.js -exec sed -i.bak -e "s/$oldPref/$newPref/" {} \;
The sed command line is executed once for each file found by find. The {} gets replaced with the filename. Sed's -i option lets you run it in-place, rather than requiring stdin/stdout. Check the man page for usage details.
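If your find supports it (GNU find does, and so does the BSD find that ships with macOS), you can end the -exec clause with + instead of \; to pass many filenames to a single sed invocation:
find /Users -name prefs.js -exec sed -i.bak -e "s/$oldPref/$newPref/" {} +
One small caveat: the dots in $oldPref are regex metacharacters, so each will match any character. That is usually harmless here, but escape them if you need an exact literal match.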
(Grain of salt: I'm basing this on my experience with Linux.)
I think it has less to do with sed and more to do with how the for loop's word list is formed. When the results of find are split into words, the space between Application and Support is treated as a delimiter.
There are several ways to work around this, but the easiest is probably to change the IFS variable. IFS is an internal variable that your shell uses to separate fields (more info). You can change it before running the find command.
Modified example from here:
#!/bin/bash
SAVEIFS=$IFS              # remember the original field separator
IFS=$(echo -en "\n\b")    # split on newlines only (the \b is a dummy so the newline survives command substitution)
for f in `find /Users -name prefs.js`
do
    echo "$f"
done
# restore $IFS
IFS=$SAVEIFS
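For what it's worth, another common way around the splitting problem is to avoid it entirely by having find emit NUL-delimited names (a bash sketch):
find /Users -name prefs.js -print0 |
while IFS= read -r -d '' f
do
    echo "$f"
done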
Run a recursive listing of all the files in /var/log and redirect standard output to a file called lsout.txt in your home directory. Complete this question WITHOUT leaving your home directory.
Ans: ls -R /var/log/ > /home/bqiu/lsout.txt
I reckon the above bash command is not correct, because what it stores is:
$ ls -R /var/log
/var/log:
empty.txt setup.log setup.log.full tmp
/var/log/tmp:
fake.txt subfolder
/var/log/tmp/subfolder:
Does that mean the problem is resolved?
I reckon NOT, because the output contains more "stuff" than only files.
If the purpose was to locate all files underneath the /var/log directory recursively, then I would hope to get an answer like this:
/var/log/empty.txt
/var/log/setup.log
/var/log/setup.log.full
/var/log/tmp/fake.txt
So that someone can parse the content of the output for later use, such as:
$ perl -wnle 'print "$. :" , $_;' logfiles
1 :/var/log/empty.txt
2 :/var/log/setup.log
3 :/var/log/setup.log.full
4 :/var/log/tmp/fake.txt
This is what I've got so far:
$ ls -1R
.:
cal.sh
cokemachine.sh
dir
sort
test.sh
./dir:
afile.txt
file
subdir
./dir/subdir:
$ ls -R | sed s/^.*://g
cal.sh
cokemachine.sh
dir
sort
test.sh
afile.txt
file
subdir
But this still leaves all the directory and sub-directory names (dir and subdir), plus a couple of blank lines.
How can I get the correct result without using Perl or awk? Preferably using only basic bash commands (just because Perl and awk are out of the assessment's scope).
Edit: I focused on my own $HOME folder just to restrict the files listed; I have little content in my home directory.
Edit 2: Sorry about the unclear form of my initial question. I have fixed the wording and hopefully everyone can see the problem now.
Try -
find /var/log > ~/lsout.txt
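If you want only regular files in that output (no directory names), you can tell find so explicitly:
find /var/log -type f > ~/lsout.txt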
If you were given no restrictions in terms of which commands can or cannot be used, ls -R /var/log >~/lsout.txt or find /var/log -print >"$HOME/lsout.txt" or any similar combination will work just fine.
However, if the point of the assignment is to write a 100% sh-based implementation, without using ls -R, find, etc., then you should be producing something along the lines of:
#!/bin/sh
# Helper method which recursively lists the contents of a given directory
# Usage: recurse_ls target_directory
recurse_ls()
{
    TARGET_DIR="$1"
    # list contents of $TARGET_DIR
    ...
    # - recursive call to list contents of sub-directories
    recurse_ls ...
    ...
}
# MAIN
# Usage: script.sh target_directory
# - check that parameters to script.sh are correct
...
# - list the contents of target_dir and its subdirectories
recurse_ls "$1"
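For reference, here is one way the blanks might be filled in. This is a minimal sketch, assuming you want one pathname per line and can ignore hidden files; the names recurse_ls and TARGET_DIR come from the skeleton above:
#!/bin/sh
recurse_ls()
{
    TARGET_DIR="$1"
    for entry in "$TARGET_DIR"/*; do
        [ -e "$entry" ] || continue    # skip the unexpanded glob when the directory is empty
        if [ -d "$entry" ]; then
            recurse_ls "$entry"        # descend into sub-directories
        else
            printf '%s\n' "$entry"     # print anything that is not a directory
        fi
    done
}
# MAIN: insist on exactly one argument, an existing directory
[ $# -eq 1 ] && [ -d "$1" ] || { echo "Usage: $0 target_directory" >&2; exit 1; }
recurse_ls "$1"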
Useful links:
variable expansion and parameter substitution
file type test operations
globbing (wildcard expansion)
quoting to account for blanks in variable values (including filenames)
I'd guess that the answer they want is:
ls -R /var/log/ > /home/bqiu/lsout.txt
ie. the original answer you said was wrong.
Except you may want to write it as:
ls -R /var/log/ > ~/lsout.txt
That way it outputs to the home directory of whoever is logged in, rather than just user "bqiu".
When it says "Run a recursive listing of all the files": to me, ls stands for listing and the -R option stands for recursive. So the wording of the question suggests using ls -R to produce the listing, but it depends upon what format they want the listing in.