I am stuck on one issue, and as a beginner in shell scripting I am finding it confusing to achieve.
I need to pull the path of each file while unzipping an archive (which contains different files with different paths) in shell.
Example:
/java/server/test/Class1.java
/java/server/Xml1.xml
Output
I should get the following as output, in some local variables:
/java/server/test/
/java/server/
Note: I am using the unzip utility for this.
If you've got the full filename including the path in $Filename, then
${Filename%/*}
will give the path, and
${Filename##*/}
will give the filename.
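A quick check of both expansions (a minimal sketch; Filename is just a placeholder variable):
Filename=/java/server/test/Class1.java
echo "${Filename%/*}"     # prints /java/server/test (the path)
echo "${Filename##*/}"    # prints Class1.java (the filename)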
You can do this using a combination of unzip and sed.
unzip a.zip | sed 's/.*: \(.*\)\/.*$/\1/'
It will give you the directory names without the file names. If you need unique values, then use the uniq command with it:
unzip a.zip | sed 's/.*: \(.*\)\/.*$/\1/' | uniq
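If your unzip supports zipinfo mode (-Z), a sketch that lists the archive without extracting it and stores the unique directory names in a variable (sort -u removes duplicates even when they are not adjacent, unlike uniq alone):
dirs=$(unzip -Z1 a.zip | sed 's|/[^/]*$||' | sort -u)
echo "$dirs"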
Just unzip the file normally
unzip myfolder.zip
and then recurse through the folder and get the directory paths:
find $(pwd)/myfolder -type d
This last command finds (find) all folders (-type d) in the extracted directory (myfolder) under the current directory (pwd).
Then you can redirect its output to a file
find $(pwd)/myfolder -type d > dirnames
or store it in a variable:
DIRNAMES=$(find $(pwd)/myfolder -type d)
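If you then want to process each directory in turn, a minimal sketch using a bash while-read loop (safer than word-splitting $DIRNAMES when names contain spaces):
while IFS= read -r dir; do
    echo "found directory: $dir"
done < <(find "$(pwd)/myfolder" -type d)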
I have a file named "file.txt" which contains several filenames like this:
filename1.ext
filename2.ext
filename3.ext
filename4.ext
I want to find these files under the directory /home (and all of its sub-directories, recursively) [I assume I will have to use the find command from the shell], and display the name of each file found along with its path.
The file "file.txt" will be passed as an argument, so the command should be ./find_path.sh file.txt
Thank you in advance for your help!
I would do this: build up an array holding the arguments you give to find
find_args=("-false")
while read -r file; do
    find_args+=("-o" "-name" "$file")
done < "$1"
find /home "${find_args[@]}"
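For illustration, with the four names from file.txt above, the array expands to a command equivalent to this (the -false seed is there so every name can be attached uniformly with -o):
find /home -false -o -name filename1.ext -o -name filename2.ext \
    -o -name filename3.ext -o -name filename4.ext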
That gives you the flexibility to add further directives, for example if you want to change the output:
find /home "${find_args[@]}" -printf '%f\t%h\n' | sort -k 1,1
Terminal noob needs a little help :)
I have a 98-row-long filename list in a .csv file. For example:
name01; name03, etc.
I have an external hard drive with a lot of files in a chaotic file structure. BUT the file names are consistent, something like:
name01_xy; name01_zq; name02_xyz etc.
I would like to copy every file and directory from the external hard drive which begins with a filename stored in the .csv file to my computer.
So basically it's a search-and-copy from an external HDD to my computer, based on a text file. I guess the easiest way to do this is a Terminal command. Do you have any advice? Thanks in advance!
The task can be split into three steps: read the search criteria from the file; find files by those criteria; copy the found files. We discuss each step separately, then combine them into a one-liner:
Read search criteria from .csv file
Since your .csv file is pretty much just a text file with one criterion per line, it's pretty easy: just cat the file.
$ cat file.csv
bea001f001
bea003n001
bea007f005
bea008f006
bea009n003
Find files
We will use find. Example: you have a directory /Users/me/where/to/search and want to find all files in there whose names start with bea001f001:
$ find /Users/me/where/to/search -type f -name "bea001f001*"
If you want to find all files that end with bea001f001, move the star wildcard (zero-or-more) to the beginning of the search criterion:
$ find /Users/me/where/to/search -type f -name "*bea001f001"
Now you can already guess what the search criterion for all files containing the name bea001f001 would look like: "*bea001f001*".
We use -type f to tell find that we are interested only in finding files and not directories.
Combine reading and finding
We use xargs to pass each line of the file to find as a -name argument:
$ cat file.csv | xargs -I [] find /Users/me/where/to/search -type f -name "[]*"
/Users/me/where/to/search/bea001f001_xy
/Users/me/where/to/search/bea001f001_xyz
/Users/me/where/to/search/bea009n003_zq
Copy files
We use cp. It is pretty straightforward: cp file target will copy file into target if target is a directory, or create or replace a file named target otherwise.
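For example, a sketch with one of the files found above (paths are the same placeholders as earlier):
cp /Users/me/where/to/search/bea001f001_xy /Users/me/where/to/copy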
Complete one-liner
We pass results from find to cp not by piping, but by using the -exec argument passed to find:
$ cat file.csv | xargs -I [] find /Users/me/where/to/search -type f -name "[]*" -exec cp {} /Users/me/where/to/copy \;
Sorry, this is my first post here. In response to the comments above, only the last file is selected likely because the other lines end with a carriage return (\r). If you first prepend the directory to each filename in the csv, you can perform the copy with the following command, which strips the \r:
cp `tr -d '\r' < file.csv` /your/target/directory
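Alternatively, a sketch that combines the \r stripping with the find-based one-liner from above, so the csv does not need to contain full paths (same placeholder directories as earlier):
tr -d '\r' < file.csv | xargs -I [] find /Users/me/where/to/search -type f -name "[]*" -exec cp {} /Users/me/where/to/copy \;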
I have a question regarding the manipulation and creation of text files in the Ubuntu terminal. I have a directory that contains several thousand subdirectories. In each directory, there is a file whose name ends in stats.txt. I want to write a piece of code that will run from the parent directory and create a file with the names of all the stats.txt files in the first column, and the information from the 5th line of the corresponding stats.txt file in the next column. The 5th line of each stats.txt file is a sentence of six words, not a single value.
For reference, I have successfully used the sed command in combination with find and cat to make a file containing the 5th line from each stats.txt file. I then used the ls command to save a list of all my subdirectories. I assumed both files would be in alphabetical order of the subdirectories, and thus easy to merge, but I was wrong. The find and cat functions, or at least my implementation of them, resulted in a file that appeared to be in random order (see below). No need to try to remedy this code; I'm open to all solutions.
# loop through subdirectories and save the 5th line of stats.txt as a different file.
for f in ~/*; do [ -d "$f" ] && cd "$f" && sed -n 5p *stats.txt > final.stats.txt; done
# find the final.stats.txt files and save them as a single file
find ./ -name 'final.stats.txt' -exec cat {} \; > compiled.stats.txt
Maybe something like this can help you get on track:
find . -name "*stats.txt" -exec awk 'FNR==5{print FILENAME, $0}' '{}' + > compiled.stats
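The same command with the awk body spelled out, in case the one-liner is hard to read (behavior unchanged):
find . -name "*stats.txt" -exec awk '
    FNR == 5 { print FILENAME, $0 }   # FNR is the line number within the current file
' {} + > compiled.stats
FNR==5 selects the 5th line of each file; FILENAME puts the file path in the first column and $0 appends the line itself.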
I have many files (File Format: ABC_YYYYMMDD.TXT) in my folder.
- ABC_20150101.TXT
- ABC_20150201.TXT
- ABC_20150301.TXT
- ABC_20150501.TXT
I need output as below.
- ABC_20150101.TXT - Moved to a folder named ARCHV in the current path.
- ABC_20150201.TXT - Moved to a folder named ARCHV in the current path.
- ABC_20150301.TXT - Moved to a folder named ARCHV in the current path.
- ABC_20150501.TXT - Kept in the current path, since it is the latest.
That is, the latest file is kept in the current folder itself, but all other files are moved to a folder named ARCHV in the present working directory.
Please let me know the UNIX statement to do the task.
Thanks
Here is a quick solution, which relies on some installed programs:
$ find -maxdepth 1 -type f -iname 'ABC*.TXT' -printf '%T@|%p\n' | sort -r -n | tail -n +2 | cut -d'|' -f2 | xargs -i mv {} ARCHV
find prints the filenames, each with a preceding Unix timestamp
sort sorts them by timestamp, newest first
tail removes the first entry (the most recent file)
cut takes the filenames only (removes the timestamp)
xargs mv moves the files (a dry-run sketch follows below)
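To preview which files would be archived before moving anything, a sketch that simply drops the final xargs step:
$ find -maxdepth 1 -type f -iname 'ABC*.TXT' -printf '%T@|%p\n' | sort -r -n | tail -n +2 | cut -d'|' -f2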
quick and dirty:
ls -1|awk 'p{printf "mv %s /archive\n",p}{p=$0}'|sh
Test this line in the directory containing those ABC_.... files.
ls without any sorting options will sort the list by name.
pipe the result to awk, which skips the last line (the latest file)
Remove the trailing |sh to see the output generated by the command. If everything looks OK, add the |sh back to have those commands executed.
I see your example file names don't have spaces. If they do contain spaces, change the mv %s into mv \"%s\".
The target archive in my one-liner was named /archive; you can change it to the right one.
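For the four example files above, running the one-liner without the |sh would print something like this (ABC_20150501.TXT, the latest, is left untouched):
mv ABC_20150101.TXT /archive
mv ABC_20150201.TXT /archive
mv ABC_20150301.TXT /archive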
I have a directory with XML files and other directories. All other directories have XML files and subdirectories, etc.
I need to write a script (bash, probably) that runs java XMLBeautifier directory for each directory. Since my skills at bash scripting are a bit rubbish, I would really appreciate a bit of help.
If you have to get the directories, you can use:
$ find . -type d
just pipe this to your program like this:
$ find . -type d | xargs java XMLBeautifier
Another approach would be to get all the files with find and pipe that to your program like this:
$ find . -name "*.xml" | xargs java XMLBeautifier
This takes all .xml files from the current directory and, recursively, from all subdirectories, then hands them over to java XMLBeautifier via xargs.
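If file or directory names may contain spaces, a common variant (assuming GNU find and xargs) uses null-delimited output:
$ find . -name "*.xml" -print0 | xargs -0 java XMLBeautifier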
find is an awesome tool ... however, if you are not sure of the file names but have a vague idea of what those xml files contain, then you can use grep.
For instance, if you know for sure that all your xml files contain a phrase "correct xml file" (you can change this phrase to whatever you feel is appropriate), then run the following at your command line ...
grep -IRw "correct xml file" /path/to/directory/*
-I skips binary files, so only text files are searched
-R searches your directory recursively
-w ensures that the pattern matches only as a whole word, not as part of a longer word
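To feed the matching files straight into the beautifier from the previous answer, a sketch using grep's -l option, which prints only the names of matching files, one per line:
grep -IRwl "correct xml file" /path/to/directory/* | xargs java XMLBeautifier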
Hope this helps!