Parsing and renaming files in all subdirectories - bash

I would like to find all the *.c files in the subdirectories, prefix a string to each file name, and leave the file in the same subdirectory.
For example, if there is a file dir1/subdir1/test.c, I would like to rename it to xyztest.c and keep it in dir1/subdir1/. How do I do that?
I would like to do this in a bash script.
Thanks,

What you need is:
Find all c files in a directory (use find command)
Separate the filename and the dirname (use basename and dirname)
Move dirname/filename to dirname/prefix_filename
That should do it.

A find command with while loop should do that:
PREFIX=xyz
while read -r line
do
    path=$(dirname "$line")
    base=$(basename "$line")
    mv "$line" "$path/${PREFIX}${base}"
done < <(find dir1 -name "*.c")
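If file names can contain newlines, a NUL-delimited read is safer still; a sketch assuming GNU find's -print0 and bash:
PREFIX=xyz
# -print0 and read -d '' pass NUL-delimited names, so any legal filename survives
while IFS= read -r -d '' line
do
    path=$(dirname "$line")
    base=$(basename "$line")
    mv "$line" "$path/${PREFIX}${base}"
done < <(find dir1 -name '*.c' -print0)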

find dir -name '*.c' -printf 'mv "%p" "%h/xyz%f"\n' | sh
This will fail if you have file names with double quotes or various other shell metacharacters; but if you don't, it's a nice one-liner.
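A variant that avoids generating shell code altogether, so quotes and other metacharacters in names are harmless; a sketch assuming your find supports -execdir (GNU and BSD find do):
# mv runs inside each file's own directory; ${1#./} strips the ./ prefix find adds
find dir -name '*.c' -execdir sh -c 'mv "$1" "xyz${1#./}"' _ {} \;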

Related

Renaming multiple files in a nested structure

I have a directory with this structure:
root
|-dir1
| |-pred_20181231.csv
|
|-dir2
| |-pred_20181234.csv
...
|-dir84
| |-pred_2018123256.csv
I want to run a command that will rename all the pred_XXX.csv files to pred.csv.
How can I easily achieve that?
I have looked into the rename facility but I do not understand the perl expression syntax.
EDIT: I tried with this code: rename -n 's/\training_*.csv$/\training_history.csv/' *.csv but it did not work
Try with this command:
find root -type f -name "*.csv" -exec perl-rename 's/_\d+(\.csv)/$1/g' '{}' \;
Options used:
-type f to match only regular files, not directories.
-name "*.csv" to only match files with the csv extension.
-exec/-execdir to execute a command, in this case perl-rename.
's/_\d+(\.csv)/$1/g' searches for a string like _20181234.csv and replaces it with .csv; $1 refers to the first captured group.
NOTE
Depending on your OS, you may be able to use plain rename instead of perl-rename.
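To preview the renames without changing anything, the Perl rename's -n (no-act) flag can be added to the same find invocation; a sketch, noting that the exact report format varies between rename versions:
# Prints what would be renamed, e.g. rename(root/dir1/pred_20181231.csv, root/dir1/pred.csv)
find root -type f -name "*.csv" -exec perl-rename -n 's/_\d+(\.csv)/$1/g' '{}' \;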
Use some shell looping:
shopt -s globstar   # enable ** in bash 4+
for file in **/*.csv
do
    echo mv "$file" "$(dirname "$file")/pred.csv"
done
In modern shells, ** is a wildcard that matches across multiple directory levels (in bash it requires shopt -s globstar), an alternative to find, which is a fine solution too. I'm not sure whether this should instead be root/**/*.csv based on the tree you provided, so I've put echo before the mv to show what it's about to do. After making sure it will do what you expect, remove the echo.

Archive old files to different files

In the past I have used the following command to archive old files into one file:
find . -mtime -1 | xargs tar -cvzf archive.tar
Now suppose we have 20 directories. I need a script that goes into each directory and archives every file into a separate archive that has the same name as the original file.
So suppose I have the following files in a directory named /Home/basic/:
first_file.txt
second_file.txt
third_file.txt
Now, after running the script, I need the output to be as follows:
first_file_05112014.tar
second_file_05112014.tar
third_file_05112014.tar
Use:
find . -type f -mtime -1 | xargs -I file tar -cvzf file.tar.gz file
I added .gz to indicate that it is gzip-compressed as well.
From man xargs:
-I replace-str
Replace occurrences of replace-str in the initial-arguments with names
read from standard input. Also, unquoted blanks do not terminate input
items; instead the separator is the newline character. Implies -x and -L 1.
The find command will produce a list of filepaths. -L 1 means that each whole line will serve as input to the command.
-I file will assign the filepath to file and then each occurrence of file in the tar command line will be replaced by its value, that is, the filepath.
So, for example, if find produces the filepath ./somedir/abc.txt, the corresponding tar command will look like:
tar -czvf ./somedir/abc.txt.tar.gz ./somedir/abc.txt
which is what is desired. And this will happen for each filepath.
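Note that neither command above produces the date stamp shown in the desired output. A minimal sketch that does, assuming 05112014 is the current date in ddmmyyyy format and that the .txt extension should be dropped:
#!/bin/sh
stamp=$(date +%d%m%Y)                  # e.g. 05112014
find . -type f -mtime -1 | while read -r p
do
    dir=$(dirname "$p")
    name=$(basename "$p" .txt)         # strip the assumed .txt extension
    tar -cvf "${dir}/${name}_${stamp}.tar" "$p"
done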
What about this shell script?
#!/bin/sh
mkdir /tmp/junk  # Easy for me to clean up!
find . -mtime -1 -type f | while read -r p
do
    dir=$(dirname "$p")
    file=$(basename "$p")
    tar cvf "/tmp/junk/${file}.tar" "$p"
done
It uses the basename command to extract the name of the file and the dirname command to extract the name of the directory. I don't actually use the directory, but I left it in there in case you might find it handy.
I put all the tar files in one place so I could delete them easily, but you could write each archive next to its source file instead if you wanted them in the same directory.
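A sketch of that variant, reusing the loop's dir and file variables:
tar cvf "${dir}/${file}.tar" "$p"   # ./somedir/abc.txt -> ./somedir/abc.txt.tar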

Bash script for listing subdirectories and files in a text file

I need a script that writes the paths of all files in a directory and its subdirectories to a text file.
For example, the script is in /Mainfolder, which contains four other folders, each holding several files.
I would like the script to write the path of each file to the text file:
Subfolder1/File1.dat
Subfolder1/File2.dat
Subfolder2/File1.dat
Subfolder3/File1.dat
Subfolder4/File1.dat
Subfolder4/File2.dat
The important thing is that there is no slash in front of each entry.
Use the find command:
find Mainfolder > outputfile
and if you only want the files listed, do
find Mainfolder -type f > outputfile
You can also strip the leading ./ if you search the current directory, with the %P format option:
find . -type f -printf '%P\n' > outputfile
If your bash version is high enough, you can do it like this:
#!/bin/bash
shopt -s globstar
printf '%s\n' ** > yourtextfile
(printf prints one path per line, which echo would not.) This solution assumes that the subdirectories contain only files -- they do not contain any directories in turn.
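If the tree may in fact contain nested directories, a sketch that keeps only regular files (still bash with globstar):
#!/bin/bash
shopt -s globstar
for f in **
do
    [ -f "$f" ] && printf '%s\n' "$f"   # print regular files, skip directories
done > yourtextfile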
find . -type f -print | sed 's|^.*/S|S|'
I have created a single file in each of the four subdirectories. (Note that this sed expression relies on the subfolder names beginning with S.) The original output is:
./Subfolder1/File1.dat
./Subfolder4/File4.dat
./Subfolder2/File2.dat
./Subfolder3/File3.dat
The filtered output is:
Subfolder1/File1.dat
Subfolder4/File4.dat
Subfolder2/File2.dat
Subfolder3/File3.dat
You can use this find with -exec:
find . -type f -exec bash -c 'f="$1"; echo "${f:2}"' _ {} \;
This prints every file under the current directory with the leading ./ removed (${f:2} drops the first two characters). The file name is passed to bash as an argument (_ {}) rather than embedded in the script text, which avoids breakage on names containing quotes.

How to use find to bundle files

I'm struggling with this task:
Write a script that takes as input a directory (path) name and a
filename base (such as ".", "*.txt", etc). The script shall search the
given directory tree, find all files matching the given filename, and
bundle them into a single file. Executing the given file as a script
should return the original files.
Can anyone help me?
First I tried to do the find part like this:
#!/bin/bash
filebase=$2
path=$1
find "$path" -name "$filebase"
Then I found this code for bundling, but I don't know how to combine them.
for i in "$@"; do
    echo "echo unpacking file $i"
    echo "cat > $i <<EOF"
    cat "$i"
    echo "EOF"
done
Going on tripleee's comment, you can use shar to generate a self-extracting archive.
You can take the output of find and pipe it to shar in order to generate the archive.
#!/bin/bash
path="$1"
filebase="$2"
archive="$3"
find "$path" -type f -name "$filebase" | xargs shar > "$archive"
The -type f option passed to find will restrict the search to files (i.e. excludes directories), which seems to be a required limitation.
If the above script is called archive_script.sh, and is executable, then you can call it as below for example:
./archive_script.sh /etc '*.txt' etc-text.shar
This will create a shar archive of all the .txt files in /etc.
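If shar is not available, the two fragments from the question can be combined directly. A minimal sketch (a hypothetical stand-in for shar; text files only, and it assumes no file contains a line consisting solely of EOF):
#!/bin/bash
path=$1
filebase=$2
echo "#!/bin/bash"                        # header of the generated unpack script
find "$path" -type f -name "$filebase" | while read -r i
do
    echo "echo unpacking file $i"
    echo "mkdir -p \"$(dirname "$i")\""   # recreate the directory on unpack
    echo "cat > \"$i\" <<'EOF'"
    cat "$i"
    echo "EOF"
done
Redirect its output to a file; running that file with bash recreates the original files.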

How do I grab the filename of the file containing a certain string when there are hundreds of files?

I have a folder with 200 files in it. We can say that the files are named "abc0" to "abc199". Five of these files contain the string "ez123" but I don't know which ones. My current attempt to find the file names of the files that contain the string is:
#!/bin/sh
while read FILES
do
cat $FILES | egrep "ez123"
done
I have a file that contains the filenames of all files in the directory. So I then execute:
./script < filenames
This verifies for me that the files containing the string exist, but I still don't have their names. Are there any ideas concerning the best way to accomplish this?
Thanks
You can try:
grep -l "ez123" abc*
find /directory -maxdepth 1 -type f -exec fgrep -l 'ez123' \{\} \;
(-maxdepth 1 is only necessary if you want to search just that directory and not the whole tree beneath it.)
fgrep is a bit faster than grep; -l lists only the names of the matching files.
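fgrep is nowadays just a (deprecated) alias for grep -F; the equivalent search with plain grep, as a sketch:
# -F matches the pattern as a fixed string, -l prints only matching file names
find /directory -maxdepth 1 -type f -exec grep -Fl 'ez123' {} \;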
Try
find -type f -exec grep -qs "ez123" {} \; -print
This uses find to locate all regular files in the current directory (and subdirectories) and runs grep on each one ({} is replaced by the file name; -qs tells grep to be silent and just set an exit code). -print then prints the names of the files in which grep found a matching line.
What about:
xargs egrep -l ez123 < filenames
That reads the file names from standard input (here, your filenames file) and prints the names of the files that contain matches.
