I am using
ls | cut -c 5-
This does return a list of the file names in the format I want them in, but it doesn't actually perform the rename. Please advise.
rename -n 's/.{5}(.*)/$1/' *
The -n is for simulating; remove it to get the actual result.
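If the Perl rename utility isn't available on your system, a plain bash loop with substring expansion does the same job. A minimal sketch, assuming every name is longer than 5 characters; the -i makes mv prompt instead of silently overwriting if two names collide after the cut:
for f in *; do
    mv -i -- "$f" "${f:5}"
done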
You can use the following command when you are in the folder where you want to do the renaming:
rename -n -v 's/^(.{5})//' *
-n is for no action and -v shows what the changes will be. If you are satisfied with the results, you can remove both of them:
rename 's/^(.{5})//' *
Something like this should work:
for x in *; do
    echo mv "$x" "$(echo "$x" | cut -c 5-)"
done
Note that this could be destructive, so run it this way first, and then remove the leading echo once you're confident that it does what you want.
Kebman's little snippet is nice if you want to cut off the leading dot of hidden files and folders in the current dir before 7-zipping or zipping. I put this in a bash script, but this is what I mean:
for f in .*; do mv -v "$f" "${f:1}"; done # cut off the leading point of hidden files and dirs
7z a -pPASSWORD -mx=0 -mhe -t7z "${DESTINATION}.7z" "${SOURCE}" '-x!7z_Compress_not_this_script_itself_*.sh' # compress all files and dirs of the current dir into one 7z file, excluding the script itself.
zip and 7z can have trouble with hidden files at the top level of the current dir, while hidden files in subdirs are accepted:
mydir/myfile = ok
mydir/.myfile = ok
.mydir/myfile = nok
.mydir/.myfile = nok
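One caveat about the dot-stripping loop above: in bash the glob .* also matches the pseudo-entries . and .., which makes mv print errors for them. A guarded sketch:
for f in .*; do
    case "$f" in .|..) continue ;; esac   # skip the . and .. pseudo-entries
    mv -v -- "$f" "${f:1}"
done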
If you get an error message saying
rename is not recognized as the name of a cmdlet
(i.e. you are in PowerShell), this might work for you:
get-childitem * | rename-item -newname { [string]($_.name).substring(5) }
I have 600,000+ images in a directory. The filenames look like this:
1000000-0.jpeg
1000000-1.jpeg
1000000-2.jpeg
1000001-0.jpeg
1000002-0.jpeg
1000003-0.jpeg
The first number is a unique ID and the second number is an index.
{unique-id}-{index}.jpeg
How would I load the unique IDs in from a .CSV file and remove each file whose unique ID matches one of the unique IDs in the .CSV file?
The CSV file looks like this:
1000000
1000001
1000002
... or I can have it separated by semicolons like so (if necessary):
1000000;1000001;1000002
You can set the IFS variable to ; and loop over the values read into an array:
#! /bin/bash
while IFS=';' read -r -a ids ; do
    for id in "${ids[@]}" ; do
        rm -- "$id"-*.jpeg
    done
done < file.csv
Try running the script with echo rm ... first to verify it does what you want.
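The same loop handles both layouts from the question: with one ID per line, each read yields a one-element array; with semicolons, all IDs land in one array. A quick dry-run demo with hypothetical sample data (echo in front of rm, so nothing is deleted):
printf '1000000;1000001\n1000002\n' > file.csv
while IFS=';' read -r -a ids; do
    for id in "${ids[@]}"; do
        echo rm -- "$id"-*.jpeg   # dry run: prints the commands only
    done
done < file.csv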
If there's exactly one ID per line, this will show you all matching file names:
ls | grep -f unique-ids.csv
If that list looks correct, you can delete the files with:
ls | grep -f unique-ids.csv | xargs rm
Caveat: This is a quick and dirty solution. It'll work if the files are all named the way you say, but beware that it could easily be tricked into deleting the wrong things by a clever attacker or a particularly hapless user.
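If you want to harden it a bit, one option (a sketch assuming GNU grep and xargs, with exactly one ID per line) is to anchor each ID so that, say, 100 cannot match 1000000-0.jpeg:
ls | grep -E -f <(sed 's|.*|^&-[0-9]+\\.jpeg$|' unique-ids.csv) | xargs -d '\n' rm --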
You could use find and sed:
find dir -regextype posix-egrep \
-regex ".*($(sed 's/\;/|/g' ids.csv))-[0-9][0-9]*\.jpeg"
Replace dir with your search directory, and ids.csv with your CSV file. To delete the files, you could include the -delete option.
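For example, the destructive version with -delete appended (same dir and ids.csv placeholders as above) would be:
find dir -regextype posix-egrep \
    -regex ".*($(sed 's/;/|/g' ids.csv))-[0-9][0-9]*\.jpeg" -delete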
I need to create a text file that lists all the files in a folder, each followed by a comma and the number 15. For example:
My folder has video.mp4, video2.mp4, picture1.jpg, picture2.jpg, picture3.png
I need the text file to read as follows:
video.mp4,15
video2.mp4,15
picture1.jpg,15
picture2.jpg,15
picture3.png,15
No spaces, just filename.ext,15 on each line. I am using a Raspberry Pi. I am aware that the command ls > filename.txt would put all the file names into a file, but how would I get a ,15 after every line?
Thanks
bash one-liner:
for f in *; do echo "$f,15" >> filename.txt; done
To avoid opening the output file on each iteration you may redirect the entire output with > filename.txt:
for f in *; do echo "$f,15"; done > filename.txt
$ printf '%s,15\n' *
picture1.jpg,15
picture2.jpg,15
picture3.png,15
video.mp4,15
video2.mp4,15
This will work if those are the only files in the directory. The format specifier %s,15\n is applied to each of printf's arguments (the names in the current directory), so each name is output with ,15 appended (and a newline).
If there are other files, then the following works too, regardless of whether files with these exact names exist or not:
$ printf '%s,15\n' video.mp4 video2.mp4 picture1.jpg picture2.jpg "whatever this is"
video.mp4,15
video2.mp4,15
picture1.jpg,15
picture2.jpg,15
whatever this is,15
Or, on all MP4, PNG and JPEG files:
$ printf '%s,15\n' *.mp4 *.jpg *.png
video.mp4,15
video2.mp4,15
picture1.jpg,15
picture2.jpg,15
picture3.png,15
Then redirect this to a file with printf ...as above... >output.txt.
If you're using Bash, then this will not make use of any external utility, as printf is built into the shell.
You need to do something like this:
#!/bin/bash
for i in $(ls folder_name); do
    echo "$i,15" >> filename.txt
done
It's possible to do this in one line; however, if you want to create a script, consider code readability in the long run.
Edit 1: better solution
As @CristianRamon-Cortes suggested in the comments below, you should not rely on the output of ls because of the problems explained in this discussion: why not parse ls. Here's how you should write the script instead:
#!/bin/bash
cd folder_name
for i in *; do
    echo "$i,15" >> filename.txt
done
You can skip the part cd folder_name if you are already in the folder.
Edit 2: Enhanced solution:
As suggested by @kusalananda, it's better to do the redirection after done, so the output file is opened only once instead of on every iteration of the for loop; the script then looks like this:
#!/bin/bash
cd folder_name
for i in *; do
    echo "$i,15"
done > filename.txt
Just one command line, using two msr commands to recursively (-r) search specific files:
msr -rp your-dir1,dir2,dirN -l -f "\.(mp4|jpg|png)$" -PAC | msr -t .+ -o '$0,15' -PIC > save-file.txt
If you want to sort by time, add --wt to the first command, like: msr --wt -l -rp your-dirs
Sort by size? Add --sz, but only the prior one is effective if you use both --sz and --wt.
If you want to exclude some directories, add something like: --nd "^(test|garbage)$"
To remove trailing \r\n in save-file.txt: msr -p save-file.txt -S -t "\s+$" -o "" -R
See msr.exe / msr.gcc48 etc. in the tools directory of my open-source project https://github.com/qualiu/msr.
A solution without a loop:
ls | xargs -I{} echo {},15 > filename.txt
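For completeness, GNU find can produce the whole list in one shot, without ls, a loop, or xargs; a sketch assuming the entries of interest are regular files at the top level:
find . -maxdepth 1 -type f ! -name filename.txt -printf '%f,15\n' > filename.txt
(The ! -name filename.txt keeps the output file itself, which the redirection creates up front, out of the list.)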
I have a list of keywords in a txt file like this:
keyword1
keyword2
keyword3
I need to search all of my files - EXCEPT for HTML and CSS files - for these keywords.
The only thing I need to know is which of the keywords DON'T appear inside any of the files. I don't care about the ones that do, or what files they are in. I simply need to know which of the keywords aren't in any of the files.
Everything I've looked up keeps coming back with results about how to find keywords and output the files they appear in. I'm open to doing this through the command line, Perl, or whatever is the easiest way to get it done.
It looks like these commands should work for finding files not containing my keywords:
grep -L "foo" *
or
ack -L "foo" *
But I don't know how to pull the keywords from my txt file, or how to make it search all files except .html and .css.
I'm running this on my server so I'm not really too concerned about how resource-intensive it is...
Since your description is incomplete, I will assume the following:
- HTML files have the .html extension (note: it could be .htm, .HTM, or .HTML; I just assume one of them, please adapt the answer to suit your situation).
- CSS files have the .css extension (again, it may be .CSS).
- Your keywords can be put into a grep command as-is, i.e. they contain NO special regular expression characters (for example, "^" means a start-of-line match and "$" means an end-of-line match).
- You are searching for files under a folder and its subfolders.
- Your keyword file is ../keywordfile.txt. Note: since the search starts in the current folder, keywordfile.txt cannot be placed there; otherwise searching keywordfile.txt itself yields a match for every keyword, and nothing will be output.
Now a quick-and-dirty way to do it:
#!/bin/bash
TMP=/tmp/filelist$$.txt
find . -type f | grep -v '\.html$' | grep -v '\.css$' > "$TMP"
## Note: if you are searching only the current folder but not subfolders,
## add the "-maxdepth 1" option to the "find" command
while read -r keyword; do
    if [ "$(while read -r file; do
                cat "$file"
            done < "$TMP" | grep -c "$keyword")" -eq 0 ]; then
        echo "$keyword does not appear in any files."
    fi
done < ../keywordfile.txt
Try this:
#!/bin/bash
keywordlist=$(tr '\n' '|' < keywordfile.txt)
for x in $(find . ! -name "*.html" ! -name "*.css" -type f)
do
    if ! grep -qE "(${keywordlist%"|"})" "$x"
    then
        echo "$x"
    fi
done
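Note that the script above lists files that contain none of the keywords, which is slightly different from what was asked. A sketch that answers the question directly (which keywords appear in no file), assuming GNU grep for -r and --exclude, and treating the keywords as fixed strings:
while IFS= read -r kw; do
    # -r searches recursively, -q only sets the exit status, -F matches literally
    grep -rqF --exclude='*.html' --exclude='*.css' -- "$kw" . \
        || printf '%s\n' "$kw"
done < ../keywordfile.txt   # kept outside the search tree, as noted above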
I have a few files that I want to copy and rename with the new file names generated by adding a fixed string to each of them.
E.g:
ls -ltr | tail -3
games.txt
files.sh
system.pl
Output should be:
games_my.txt
files_my.sh
system_my.pl
I am able to append at the end of the file names, but not before the extension.
for i in `ls -ltr | tail -10`; do cp $i `echo $i\_my`;done
I am thinking that if I am able to save the extension of each file with a simple cut, as follows,
ext=$(echo "$i" | cut -d'.' -f2)
then I can append the same in the above for loop.
do cp $i `echo $i$ext\_my`;done
How do I achieve this?
You can use the following:
for file in *
do
    name="${file%.*}"
    extension="${file##*.}"
    cp "$file" "${name}_my.${extension}"
done
Note that ${file%.*} returns the file name without its extension, so from hello.txt you get hello. ${name}_my.${extension} then glues the pieces back together, turning hello.txt into hello_my.txt.
Regarding the extension, extension="${file##*.}" extracts it (without the dot). It is based on the question Extract filename and extension in bash.
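A quick demonstration of those two expansions:
$ file="hello.txt"
$ echo "${file%.*}"
hello
$ echo "${file##*.}"
txt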
If the shell variable expansion mechanisms in fedorqui's answer look too unreadable to you, you can also use the Unix tool basename with a second argument to strip off the suffix:
for file in *.txt
do
cp -i "$file" "$(basename "$file" .txt)_my.txt"
done
Btw, in such cases I always suggest applying the -i option for cp to prevent unwanted overwrites due to typing errors or similar.
It's also possible to use a direct replacement with shell methods:
cp -i "$file" "${file/.txt/_my.txt}"
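One caveat with ${file/.txt/_my.txt}: it replaces the first occurrence of .txt anywhere in the name, so notes.txt.bak.txt would become notes_my.txt.bak.txt. Anchoring at the end via suffix removal avoids that (a small sketch):
cp -i "$file" "${file%.txt}_my.txt"   # strip the trailing .txt, then re-append it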
The ways are numerous :)
Below, I am trying to find the latest version of a file that could be in multiple directories.
Example Directory:
~inventory/emails/2012/06/InventoryFeed-Activev2.csv 2012/06/05
~inventory/emails/2012/06/InventoryFeed-Activev1.csv 2012/06/03
~inventory/emails/2012/06/InventoryFeed-Activev.csv 2012/06/01
Here's the bash script:
#!/bin/bash
FILE = $(find ~/inventory/emails/ -name INVENTORYFEED-Active\*.csv | sort -n | tail -1)
#echo $FILE #For Testing
cp $FILE ~/inventory/Feed-active.csv;
The error I am getting is:
./inventory.sh: line 5: FILE: command not found
The script should copy the newest file as attempted above.
Two questions:
First, is this the best method to achieve what I want?
Secondly, what's wrong above?
It looks good, but you have spaces around the = sign. This won't work. Try:
#!/bin/bash
FILE=$(find ~/inventory/emails/ -name INVENTORYFEED-Active\*.csv | sort -n | tail -1)
#echo $FILE #For Testing
cp "$FILE" ~/inventory/Feed-active.csv
... What's wrong above?
Variable assignment. You are not supposed to put extra spaces around the = sign. The following should work:
FILE=$(find ~/inventory/emails/ -name INVENTORYFEED-Active\*.csv | sort -n | tail -1)
... is this the best method to achieve what I want?
Probably not. But the best way depends on many factors. Perhaps whoever writes those files can put them in the right location in the first place. You could also check the file modification time, but that could fail too... So as long as it works for you, I'd say go for it :)
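For example, a modification-time-based pick could look like this (a sketch assuming GNU find for -printf, and no newlines in the file names):
FILE=$(find ~/inventory/emails/ -name 'INVENTORYFEED-Active*.csv' -printf '%T@ %p\n' \
    | sort -n | tail -1 | cut -d' ' -f2-)
cp "$FILE" ~/inventory/Feed-active.csv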