I have been working on this script to find files in all the folders of my directory and rename them to my desired output.
Before filename:
Folder\actors\character\hair\haircurly1.dds
After filename:
haircurly1.dds
I am working with over 12,000 textures with different names that I extracted from an archive. My extractor included the path to the folder where it extracted the files in each file name. For example, a file that should have been named haircurly1.dds was named Folder\actors\character\hair\haircurly1.dds during extraction.
cd ~/Desktop/MainFolder/Folder
find . -name '*\\*.dds' | awk -F\\ '{ print; print $NF; }'
This command finds every texture file I'm working with that still contains backslashes. (I have already renamed some of the files with other commands, but I want one that renames all of the files at once, rather than writing a specific command for every folder across 12,000+ texture files.)
I use print; and it sends me the file path:
./Folder\actors\character\hair\haircurly1.dds
I use print $NF; and it sends me the text after the awk separator:
haircurly1.dds
I would like every file name that this script runs through to be changed to the $NF output of the awk command. Does anyone know how I can make my script rename the files to their $NF output?
Thank you
Your question isn't clear but it SOUNDS like all you want to do is:
for file in *\\*; do                 # match every name containing a literal backslash
    mv -- "$file" "${file##*\\}"     # strip everything up to and including the last backslash
done
If that's not all you want then edit your question to clarify your requirements.
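If the affected files are spread across several subdirectories rather than one folder, a recursive variant could look like this (a sketch using find -execdir, untested against your tree; each file is renamed within its own directory):

find . -name '*\\*.dds' -execdir bash -c '
    for f; do
        mv -- "$f" "${f##*\\}"    # keep only the part after the last backslash
    done
' _ {} +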
Have your awk command format and print a "mv" command, and pipe the result to bash. The extra single-quoting ensures bash treats backslash as a normal char.
find . -name '*\\*.dds' | awk -F\\ '{print "mv '\''" $0 "'\'' " $NF}' | bash -x
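If you want to inspect the generated commands before running anything, drop the final stage and just look at the output, for example:

find . -name '*\\*.dds' | awk -F\\ '{print "mv '\''" $0 "'\'' " $NF}' | head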
hth
I have a couple of files in a directory that are named like this:
1_38OE983729JKHKJV.csv
an integer followed by an ID (the integer and the ID are both unique).
I need to prepend this ID to every line of the file for each file in the folder to prepare the files for import to a database (and discard the integer part of the filename). The contents of the file look something like this:
BW;20015;11,45;0,49;41;174856;4103399
BA;25340;11,41;0,55;40;222161;4599779
BB;800;7,58;0,33;42;10559;239887
HE;6301;9,11;0,39;40;69191;1614302
.
.
.
Total;112613;9,33;0,43;40;1207387;25897426
The end result should look something like this:
38OE983729JKHKJV;BW;20015;11,45;0,49;41;174856;4103399
38OE983729JKHKJV;BA;25340;11,41;0,55;40;222161;4599779
38OE983729JKHKJV;BB;800;7,58;0,33;42;10559;239887
38OE983729JKHKJV;HE;6301;9,11;0,39;40;69191;1614302
.
.
.
38OE983729JKHKJV;Total;112613;9,33;0,43;40;1207387;25897426
Thanks for the help!
EDIT: Spelling and vocabulary for clarity
Loop over the files with for, use parameter expansion to extract the id.
#!/bin/bash
for csv in *.csv ; do
prefix=${csv%_*}
id=${csv#*_}
id=${id%.csv}
sed -i~ "s/^/$id;/" "$csv"
done
If the ID can contain underscores, you might need to be more careful with the expansion.
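For the example name above, the expansions work out like this (a quick check you can run in any bash shell):

csv='1_38OE983729JKHKJV.csv'
echo "${csv%_*}"     # 1                      (the integer before the underscore)
id=${csv#*_}         # 38OE983729JKHKJV.csv   (everything after the underscore)
echo "${id%.csv}"    # 38OE983729JKHKJV       (the .csv suffix removed)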
With the awk tool:
for f in *csv; do awk '{ fn=FILENAME; $0=substr(fn,index(fn,"_")+1,length(fn)-6)";"$0 }1' "$f" > tmp && mv tmp "$f"; done
fn=FILENAME - the filename
You could also try the following, in a single awk command, which additionally keeps track of how many files are opened during the operation, so that we avoid the error about the maximum number of open files.
awk 'FNR==1{close(val);val=FILENAME;split(FILENAME,a,"_");sub(/\..*/,"",a[2])} {print a[2]";"$0}' *.csv
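The same command laid out with comments (identical logic, just spread over several lines for readability):

awk '
FNR==1 {                        # runs at the first line of each input file
    close(val)                  # close the previously remembered filename
    val = FILENAME
    split(FILENAME, a, "_")     # a[1] = integer part, a[2] = ID plus extension
    sub(/\..*/, "", a[2])       # strip the extension, leaving only the ID
}
{ print a[2] ";" $0 }           # prepend the ID to every line
' *.csv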
With GNU awk for inplace editing and gensub() all you need is:
awk -i inplace '{print gensub(/.*_(.*)\..*/,"\\1;",1,FILENAME) $0}' *.csv
No shell loops or anything else necessary, just that command.
I have a directory with a bunch of csv files. I want to remove the duplicate lines from all the files.
I have tried an awk solution, but it seems a bit tedious to do it for each and every file.
awk '!x[$0]++' file.csv
Even if I do
awk '!x[$0]++' *
I will lose the file names. Is there a way to remove duplicates from all the files using just one command or script?
Just to clarify
If there are 3 files in the directory, then the output should contain 3 files, each sorted independently. After running the command or script the same folder should contain 3 files each with unique entries.
for f in dir/*;
do awk '!a[$0]++' "$f" > "$f.uniq";
done
To overwrite the existing files, change it to awk '!a[$0]++' "$f" > "$f.uniq" && mv "$f.uniq" "$f" after testing!
With GNU awk for "inplace" editing and automatic open/close management of output files:
awk -i inplace '!seen[FILENAME,$0]++' *.csv
This will create new files, with suffix .new, that have only unique lines:
gawk '!x[$0]++{print>(FILENAME".new")}' *.csv
How it works
!x[$0]++
This is a condition. It evaluates to true only if the current line, $0, has not been seen before.
print >(FILENAME".new")
If the condition evaluates to true, then this print statement is executed. It writes the current line to a file whose name is the name of the current file, FILENAME, followed by the string .new.
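For instance, with a small hypothetical file (the names here are just for illustration):

$ printf 'a\nb\na\n' > sample.csv
$ gawk '!x[$0]++{print>(FILENAME".new")}' sample.csv
$ cat sample.csv.new
a
b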
I have split a file into multiple text files using the command below:
awk '{print $2 > $1"_npsc.txt"}' complete.txt
I want to store all the generated text files in another directory. How can I achieve this? Please help.
You could do something like:
awk '{print $2 > "path/to/directory"$1"_npsc.txt"}' complete.txt
Just make sure that you create the directory first (and replace path/to/directory with the path that you like).
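If you would rather not hard-code the path, a variant (just a sketch; outdir is an illustrative name) passes the directory in as an awk variable:

outdir=path/to/directory
mkdir -p "$outdir"      # create the target directory first
awk -v dir="$outdir" '{print $2 > (dir "/" $1 "_npsc.txt")}' complete.txt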
Input
A file called input_file.csv, which has 7 columns, and n rows.
Example header and row:
Date Location Team1 Team2 Time Prize_$ Sport
2016 NY Raptors Gators 12pm $500 Soccer
Output
n files, where the rows in each new file are grouped based on their values in column 7 of the original file. Each file is named after that shared value from column 7. Note: each file will have the same header. (The script currently does this.)
Example: if 2 rows in the original file had golf as their value for column 7, they would be grouped together in a file called golf.csv. If 3 other rows shared soccer as their value for column 7, they would be found in soccer.csv.
An array that has the name of each generated file in it. This array lives outside of the scope of awk. (This is what I need help with.)
Example: Array = [golf.csv, soccer.csv]
Situation
The following script produces the desired output. However, I want to run another script on each of the newly generated files and I don't know how.
Question:
My idea is to store the names of each new file in an array. That way, I can loop through the array and do what I want to each file. The code below passes a variable called array into awk, but I don't know how to add the name of each file to the array.
#!/bin/bash
ARRAY=()
awk -v myarray="$ARRAY" -F"\",\"" 'NR==1 {header=$0}; NF>1 && NR>1 {if(! files[$7]) {print header >> ("" $7 ".csv"); files[$7]=1}; print $0 >> ("" $7 ".csv"); close("" $7 ".csv");}' input_file.csv
for i in "${ARRAY[#]}"
do
:
echo $i
done
Rather than struggling to get awk to fill your shell array variable, why not:
make sure that the *.csv files are created in a clean directory
use globbing to loop over all *.csv files in that directory?
awk -F'","' ... # your original Awk command
for i in *.csv # use globbing to loop over resulting *.csv files
do
:
echo $i
done
Just off the top of my head, untested because you haven't supplied very much sample data, what about this?
#!/usr/bin/awk -f

FNR==1 {
    header=$0
    next
}

!($7 in files) {
    files[$7]=sprintf("sport-%s.csv", $7)
    print header > files[$7]
}

{
    print > files[$7]
}

END {
    printf("declare -a sportlist=( ")
    for (sport in files) {
        printf("\"%s\" ", sport)
    }
    printf(")\n")
}
The idea here is that we key the array files[] by sport name and store the generated filename as each value. (You can format the filename inside sprintf() as you see fit.) We step through the file, writing out the header line whenever we meet a new sport with no recorded filename. Then every data line is printed to the file looked up by its sport name.
For your second issue, exporting the array back to something outside of awk, the END block here outputs a declare line which can be interpreted by bash. If you feel lucky, you can eval the output of this awk script via command substitution, and the declare command will effectively be interpreted by your shell:
eval $(/path/to/awkscript inputfile.csv)
Or, if you subscribe to the school of thought that considers eval to be evil, you can redirect the awk script's standard output to a temporary file which you source:
/path/to/awkscript inputfile.csv > /tmp/yadda.$$
. /tmp/yadda.$$
(Don't use this temp file, make a real one with mktemp or the like.)
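A minimal sketch of that mktemp variant (assuming the awk script above prints the declare line on standard output):

tmpfile=$(mktemp) || exit 1
/path/to/awkscript inputfile.csv > "$tmpfile"
. "$tmpfile"                      # source the declare -a sportlist=( ... ) line
rm -f -- "$tmpfile"
echo "${sportlist[@]}"            # the array is now available in this shell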
There's no way for any program to modify the environment of the parent shell. Just have the awk script output the names of the files as standard output, and use command substitution to put them in an array.
filesArray=($(awk ... ))
If the files might have spaces in them, you need a different solution; assuming you're on bash 4, you can just be sure to print each file on a separate line and use readarray:
readarray filesArray < <( awk ... )
if the files might have newlines in them, too, then things get tricky...
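For example, combining readarray with an awk pass that prints each new file name once might look like this (a sketch; the field separator matches the quoted-CSV assumption from the question, and -t strips the trailing newline from each element):

readarray -t filesArray < <(
    awk -F'","' 'NR>1 && !seen[$7]++ { print $7 ".csv" }' input_file.csv
)
printf '%s\n' "${filesArray[@]}"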
If your file is not large, you can run another script to get the unique $7 values, for example
$ awk 'NR>1&&!a[$7]++{print $7}' sports
will print the values; you can change it to your file name format as well, such as
$ awk 'NR>1&&!a[$7]++{print tolower($7)".csv"}' sports
This can then be piped to your other process, here for example to wc:
$ awk ... sports | xargs wc
This will do what I THINK you want:
oIFS="$IFS"; IFS=$'\n'
array=( $(awk '{out=$7".csv"; print > out} !seen[out]++{print out}' input_file.csv) )
IFS="$oIFS"
If your input file really is comma-separated instead of space-separated as you show in the sample input in your question then adjust the awk script to suit (You might want to look at GNU awk and FPAT).
If you don't have GNU awk then you'll need to add a bit more code to close the open output files as you go.
The above will fail if you have file names that contain newlines but will be fine for blank chars or other white space.
I have a file that contains lines like the ones below:
/folder/share/folder1
/folder/share/folder1/file.gz
/folder/share/folder2/11072012
/folder/share/folder2/11072012/file1.rar
I am trying to remove these lines:
/folder/share/folder1/
/folder/share/folder2/11072012
so that the final result is the following:
/folder/share/folder2/11072012/file1.rar
/folder/share/folder1/file.gz
In other words, I am trying to keep only the path for files and not directories.
This
awk -F/ '$NF~/\./{print}'
splits input records on the character "/" using the command line switch -F
examines the last field of the input record, $NF (where NF is the number of fields in the input record), to see if it DOES contain the character "." (the ~ operator)
if it matches, output the record.
Example
$ echo -e '/folder/share/folder.2/11072012
/folder/share/folder2/11072012/file1.rar' | mawk -F/ '$NF~/\./{print}'
/folder/share/folder2/11072012/file1.rar
$
NB: my microscript looks at . ONLY in the filename part of the full path.
Edit: in my first post I had reversed the logic, printing dotless files instead of dotted ones.
You could use the find command to get only the file list:
find <directory> -type f
With awk:
awk -F/ '$NF ~ /\./{print}' File
Set / as delimiter, check if last field ($NF) has . in it, if yes, print the line.
A text-only solution:
sed -n 'H
$ {g
:cycle
s/\(\(\n\).*\)\(\(\2.*\)\{0,1\}\)\1/\3\1/g
t cycle
s/^\n//p
}' YourFile
This works on the file and folder names alone, assuming that:
a line that is contained at the start of another line is a folder and a unique line is a file (this could be complemented by an OS file-existence check on the result)
lines are sorted (at least each folder appears before the files inside it)
it is a POSIX sed script, so use --posix on GNU sed